Intelligence, shaped by you. Powered by Solana.
Aera is an open-source platform that empowers you to create, deploy, and manage AI-native applications with ease. Designed for integration with your existing tools and data, Aera brings the power of large language models (LLMs) to your fingertips, with no coding required.
No-Code Interface: Design complex AI workflows using a drag-and-drop interface.
Custom Logic: Implement conditional flows and multi-step processes tailored to your needs.
Real-Time Testing: Instantly preview and test your workflows within the platform.
Document Integration: Upload PDFs, slides, or web pages to provide context to your AI applications.
Dynamic Responses: Generate AI outputs that reference your specific documents and data sources.
Knowledge Management: Maintain a centralized knowledge base for consistent and accurate AI interactions.
Third-Party Tools: Connect with platforms like Notion, Slack, Google Drive, and more.
API Connectivity: Utilize built-in API nodes to integrate with external services and databases.
Web3 Compatibility: Incorporate on-chain data and blockchain interactions into your AI workflows.
Full Control: Deploy Aera on your own infrastructure to maintain control over your data and operations.
Community Driven: Benefit from a growing community of contributors and shared resources.
Extensibility: Customize and extend the platform to fit your unique requirements.
Aera simplifies the process of building intelligent, context-aware AI applications. Whether you're a developer, entrepreneur, or part of a larger organization, Aera provides the tools you need to harness the capabilities of LLMs and integrate them into your daily workflows.
For more information and to get started, explore our documentation.

A gateway for everyone to build powerful AI applications.
Aera for Solana dApps
Built with the future of the internet in mind, Aera is particularly well-suited for projects integrating with ecosystems like Solana to build large-scale systems and automations. It offers:
REST APIs & Management UI: Easily connect to both Web2 and Web3 environments.
Low-Code Workflow Builder: Design complex logic without writing extensive code.
Modular AI Agent Framework: Perfect for crypto-specific workflows and decentralized coordination.
RAG Capabilities: Simplify high-performance retrieval-augmented generation.
Prompt Orchestration: Fine-tune prompt logic via a user-friendly interface.
Mainstream LLM Support: Integrate both open and proprietary language models effortlessly.
Key Benefits
Privacy, Transparency, User Control: Aligns with Web3 principles without sacrificing usability.
Token-Gated Experiences: Supports advanced blockchain-based access.
Aera is designed for scalability and ease of use, supporting technical and non-technical users in Solana and other ecosystems. It offers tools from prototype to production, providing a frictionless path from concept to deployment without needing code. Whether you are an organization aiming to automate or an individual crafting AI agents, Aera is the technical foundation for your next project.
Aera’s Knowledge feature visualises each stage of the RAG pipeline, providing a user-friendly UI for application builders to easily manage personal or team knowledge.
Developers can upload internal company documents, FAQs, and standard working guides, then process them into structured data that large language models (LLMs) can query.
Compared with the static pre-trained datasets built into AI models, the content in a knowledge base can be updated in real time, ensuring LLMs always have access to the latest information and helping avoid issues caused by outdated or missing data.
When an LLM receives a user query, it first uses keywords to search within the knowledge base. Based on those keywords, the knowledge base returns content chunks with high relevance rankings, giving the LLM crucial context to generate more precise answers.
This approach ensures LLMs do not rely solely on pre-trained knowledge. Instead, they can also draw from real-time documents and databases, enhancing both the accuracy and relevance of responses.
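To make this concrete, here is a minimal sketch of that retrieve-then-generate loop. It is illustrative only: the function names are invented, and it ranks chunks by simple keyword overlap, whereas a production pipeline like Aera's would typically use embeddings and a vector index.

```python
# Minimal retrieval-augmented generation loop (illustrative sketch).

def keyword_score(query: str, chunk: str) -> int:
    """Count how many query keywords appear in a chunk."""
    keywords = {w.lower() for w in query.split()}
    return len(keywords & {w.lower() for w in chunk.split()})

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k chunks ranked by keyword relevance."""
    return sorted(chunks, key=lambda c: keyword_score(query, c), reverse=True)[:top_k]

def build_prompt(query: str, context_chunks: list[str]) -> str:
    """Ground the LLM's answer in the retrieved context."""
    context = "\n---\n".join(context_chunks)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

# Usage: the assembled prompt is what gets sent to the configured LLM.
# prompt = build_prompt(user_query, retrieve(user_query, knowledge_base_chunks))
```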
Real-Time Updates: The knowledge base can be updated anytime, ensuring the model always has the latest information.
Precision: By retrieving relevant documents, the LLM can ground its answers in actual information, reducing hallucinations.
Flexibility: Developers can customise the knowledge base content to match specific needs, defining the scope of knowledge as required.
You only need to prepare text content, such as:
Long text content (TXT, Markdown, DOCX, HTML, JSONL, or even PDF files)
Structured data (CSV, Excel, etc.)
Online data sources (Web pages, Notion, etc.)
By simply uploading files to the Knowledge Base, data processing is handled automatically.
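For illustration, a programmatic upload might look like the sketch below; the route, dataset ID, and payload shape are hypothetical (the documented path is the Knowledge Base UI, which handles chunking and indexing for you).

```python
import requests

# Hypothetical endpoint and credentials -- check your Aera instance's
# API reference for the actual route.
API_BASE = "http://localhost:5001/api"
DATASET_ID = "your-dataset-id"
API_KEY = "your-api-key"

with open("faq.pdf", "rb") as f:
    resp = requests.post(
        f"{API_BASE}/datasets/{DATASET_ID}/documents",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("faq.pdf", f, "application/pdf")},
    )
resp.raise_for_status()  # server-side processing then proceeds automatically
```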
If your team already has an independent knowledge base, you can use the “Connect to an External Knowledge Base” feature to establish its connection with Aera.
Imagine you are building a customer support assistant for a Web3 project based on Solana. Your team regularly updates its whitepapers, development progress, and frequently asked questions. You can upload those documents to the Knowledge Base in Aera, allowing the assistant to draw real-time, relevant data about Solana’s blockchain, tokenomics, and related updates.
Traditionally, developing and maintaining a support chatbot for such a niche, fast-evolving ecosystem could be time-consuming. But in Aera, this process is streamlined, allowing you to have the system up and running in minutes. With constant updates to your knowledge base, your AI support assistant will always be equipped with the most current and accurate information to answer user queries related to Solana, cryptocurrency trends, or Web3 development.
In Aera, a Knowledge Base is a collection of Documents, each of which can include multiple Chunks of content. You can integrate an entire knowledge base into an application to serve as a retrieval context, drawing from uploaded files or data synchronised from other sources.
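The hierarchy can be pictured as plain data structures. A minimal sketch, with illustrative field names rather than Aera's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    """A retrievable unit of text inside a document."""
    text: str
    embedding: list[float] | None = None  # populated during indexing

@dataclass
class Document:
    """One uploaded file or synchronised page, split into chunks."""
    title: str
    chunks: list[Chunk] = field(default_factory=list)

@dataclass
class KnowledgeBase:
    """A collection of documents an application can use as retrieval context."""
    name: str
    documents: list[Document] = field(default_factory=list)
```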
If your team already has an independent, external knowledge base that is separate from the Aera platform, you can link it using the External Knowledge Base feature. This way, you do not need to re-upload all your content to Aera. Your AI app can directly access and process information in real time from your team’s existing knowledge.
The majority of the $AERA supply is circulating. At launch, most tokens were allocated to the community to promote broad ownership and encourage decentralized involvement. This approach ensures a level playing field, giving everyone equal access to participate in and contribute to the growth of the ecosystem.
Vesting Plan. Aera will follow a dynamic vesting plan, including burns and similar mechanisms, lasting over 12 months. Each month, a small percentage of the supply will be released back to fund project improvements.
Project Fund. The remaining $AERA will be strategically allocated to Aera's project fund, which will support bounties, app improvements, and marketing.
Solana Contract Address: HhwnFgEoiuANFRUq8kpEPujb3MAr2uJboWLbUV8mpump

Aera provides multiple ways to get started with building your AI application, whether you're prototyping a chatbot, setting up a workflow, or experimenting with agents.
To begin using Aera, you’ll first need to configure a model provider. This enables your applications to interact with large language models (LLMs) for tasks such as chat, embedding, or speech-to-text.
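Under the hood, a configured provider amounts to an API key plus a model endpoint. As a rough illustration, a direct call to an OpenAI-compatible chat endpoint looks like this; once the provider is configured, Aera makes such calls for you:

```python
import requests

# The key is the same one you would paste into Aera's provider settings.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```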
Check if your question is here before asking!
What can I actually build with Aera?
From website chatbots to internal tools and personal AI assistants: with Aera you can build fully custom AI workflows using real data and logic.
Do I need to know how to code?
Nope. Aera’s visual builder and plug-and-play tools make it easy to create AI flows without writing a single line of code.
What makes Aera different from a normal chatbot (e.g. ChatGPT)?
Aera is built for creation and utility, not just conversation. You can design logic, connect tools, work with documents, and even deploy AI apps.
Is Aera open-source?
Yes, Aera is open-source. We're built with transparency in mind. Aera is also self-hostable, so you can keep everything local or fully private.
Can I collaborate with a team?
Of course. Aera supports multiple workspaces, role permissions, and shared editing for teams building together.
Does Aera support image or voice generation?
Yes. You can connect Aera to DALL·E, Stable Diffusion, and even use text-to-speech thanks to Agora to make your AI voice-ready.
To create an application from a template:
Select Create from Template.
Browse available templates and choose one that matches your goal.
Click Use this Template to begin customising it.
Templates can be edited to suit your specific workflow, and provide a great foundation for learning how applications are structured in Aera.
More experienced users, or those with specific requirements, can create an application from a blank slate.
To do this:
Navigate to Studio.
Select Create from Blank.
You'll then choose an application type. Aera supports five core types:
Chatbot - for conversational interfaces.
Text Generator - for generating structured or unstructured text.
Agent - for autonomous or multi-step actions.
Chatflow - for rule-based or conditional chat logic.
Workflow - for low-code task orchestration.
When creating an application, you can define its name, add a description, and assign an icon. These settings help collaborators quickly understand the purpose of the app and keep your workspace organised.
Each application type in Aera serves a different function:
Chatbots are ideal for real-time Q&A and support scenarios.
Text Generators suit content creation, summarisation, or rewriting tasks.
Agents combine prompts, memory, and logic for autonomous AI operations.
Chatflows use branching logic to guide users through interactive flows.
Workflows enable multi-step logic and background processing using a low-code builder.
To add a model provider:
Enter the required API key or access credentials.
You can obtain these from your chosen provider’s official dashboard.
Save your configuration.
Once added, your selected models will be available for use in applications across Aera.
Aera supports a variety of model types depending on your application needs:
System Inference Models: For general tasks such as chat, text generation, and summarisation.
Supported: OpenAI, Anthropic, Hugging Face, Replicate, Xinference, Ollama, LocalAI, and more.
Embedding Models: Used for indexing knowledge bases and understanding user input.
Supported: OpenAI, ZHIPU, Jina AI.
Rerank Models: Enhance search output relevance by reordering retrieved documents.
Supported: Cohere, Jina AI.
Speech-to-Text Models: Convert audio input into text, useful in voice interfaces.
Supported: OpenAI.
Model providers in Aera fall into two categories:
Full model suites: these include platforms like OpenAI and Anthropic that offer complete families of models.
Add the provider by entering your API key in Aera.
Once integrated, you can access all models offered by that provider.
Hosted single models: these offer individual models hosted on third-party services like Hugging Face or Replicate.
Integrate models one by one depending on your use case.
Some providers may require additional configuration.
Once added, your models are ready to be used in any application you create within Aera.
Plugins are developer-friendly, highly extensible third-party service extension modules. While the Aera platform already includes numerous tools maintained by the Aera team, the existing tools may not fully meet the demands of various niche scenarios, and developing and integrating new tools into the Aera platform often requires a lengthy process.
The new plugin system goes beyond the limitations of the previous framework, offering richer and more powerful extension capabilities. It contains five distinct plugin types, each designed to solve well-defined scenarios, giving developers limitless freedom to customise and enhance Aera applications.
Additionally, the plugin system is designed to be easily shared. You can distribute your plugins via the Marketplace, GitHub, or as a Local file package. Other developers can quickly install these plugins and benefit from them.
Whether you're looking to integrate a new model or add a specialised tool to expand Aera's existing features, the robust plugin marketplace has the resources you need. We encourage more developers to join and help shape the Aera ecosystem, benefiting everyone involved.
Models: These plugins integrate various AI models (including mainstream LLM providers and custom models) to handle configuration and requests for LLM APIs.
Tools: Third-party services that can be invoked by Chatflow, Workflow, or Agent-type applications. They provide a complete API implementation to enhance the capabilities of Aera applications. For example, a Solana blockchain plugin for querying on-chain data could be implemented through this type (a sketch follows this list).
Agent Strategy: The Agent Strategy plugin defines the reasoning and decision-making logic within an Agent node, including tool selection, invocation, and result processing.
Extensions: Lightweight plugins that only provide endpoint capabilities for simpler scenarios, enabling fast expansions via HTTP services. This approach is ideal for straightforward integrations requiring basic API invocation.
Bundle: A "plugin bundle" is a collection of multiple plugins. Bundles allow you to install a curated set of plugins all at once, with no more adding them one by one.
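As a sketch of the Tools example above, the on-chain query could wrap Solana's public JSON-RPC API. The getBalance call is real Solana RPC; the plugin wiring around it is simplified and hypothetical:

```python
import requests

SOLANA_RPC = "https://api.mainnet-beta.solana.com"

def get_sol_balance(address: str) -> float:
    """Query an account's SOL balance via Solana's JSON-RPC getBalance method."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": "getBalance", "params": [address]}
    resp = requests.post(SOLANA_RPC, json=payload, timeout=10)
    resp.raise_for_status()
    lamports = resp.json()["result"]["value"]
    return lamports / 1_000_000_000  # 1 SOL = 10^9 lamports

# A Tools plugin would expose this function to Chatflow, Workflow, or Agent
# applications, declaring `address` as its input and the balance as its output.
```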
Workflows reduce system complexity by breaking down complex tasks into smaller steps (nodes), decreasing reliance on prompt engineering and model inference capabilities.
Aera workflows are divided into two types:
Chatflow: Designed for conversational scenarios such as customer service, semantic search, and other applications that require multi-step logic in response construction.
Workflow: Geared towards automation and batch processing scenarios, ideal for high-quality translation, data analysis, content generation, email automation, and more.
To address the complexity of user intent recognition in natural language input, Chatflow provides question understanding nodes. Compared to Workflow, it adds support for chatbot features such as conversation history (Memory), annotated replies, and Answer nodes.
Workflow, on the other hand, offers a variety of logic nodes for handling complex business logic in automation and batch processing scenarios. These include code nodes, IF/ELSE nodes, template transformations, iteration nodes, and more. Additionally, Workflow provides capabilities for timed and event-triggered actions, enabling the construction of automated processes.
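Conceptually, a workflow is a graph of typed nodes connected by edges. The hypothetical, much-simplified definition below shows the idea; real Aera workflows are produced by the visual canvas, not written by hand:

```python
# Illustrative node/edge structure for a tiny support workflow.
workflow = {
    "nodes": [
        {"id": "start", "type": "start"},
        {"id": "branch", "type": "if-else",
         "condition": "query contains 'refund'"},
        {"id": "llm_refund", "type": "llm",
         "prompt": "Draft a refund reply for: {{query}}"},
        {"id": "llm_general", "type": "llm",
         "prompt": "Answer the question: {{query}}"},
        {"id": "end", "type": "end"},
    ],
    "edges": [
        ("start", "branch"),
        ("branch", "llm_refund"),   # true branch
        ("branch", "llm_general"),  # false branch
        ("llm_refund", "end"),
        ("llm_general", "end"),
    ],
}
```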
Build a workflow from scratch, or use system templates to get started.
Familiarise yourself with basic operations such as creating nodes on the canvas, connecting and configuring nodes, debugging workflows, and viewing run history.
Save and publish a workflow.
Run the published application or call the workflow through an API.
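Calling a published workflow over HTTP might look like the sketch below. The route, payload shape, and key handling are assumptions modelled on common REST conventions, so check your instance's API reference:

```python
import requests

API_BASE = "http://localhost:5001/api"  # your Aera instance
APP_API_KEY = "your-app-api-key"        # issued when the app is published

resp = requests.post(
    f"{API_BASE}/workflows/run",        # hypothetical route
    headers={"Authorization": f"Bearer {APP_API_KEY}"},
    json={"inputs": {"query": "Translate to French: hello"}, "user": "end-user-123"},
)
resp.raise_for_status()
print(resp.json())
```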
Nodes are the key components of a workflow. By connecting nodes with different functionalities, you can execute a series of operations within the workflow. For core workflow nodes, please refer to Block Description.
Variables link the input and output of nodes within a workflow, enabling complex processing logic throughout the process. For more details, please refer to Variables.
Application Scenarios
Chatflow: Designed for conversational scenarios, such as customer service, semantic search, and other applications requiring multi-step logic in response construction.
Workflow: Geared towards automation and batch processing scenarios, suitable for high-quality translation, data analysis, content generation, email automation, and more.
Usage Entry Points
Differences in Available Nodes
The End node is the ending node for Workflow and can only be selected at the end of the process.
The Answer node is specific to Chatflow. It is used for streaming text output and can emit output at intermediate steps in the process.
Chatflow has built-in chat memory (Memory) for storing and passing multi-turn conversation history, which can be enabled in nodes like LLM and question classifiers. Workflow does not support Memory-related configurations.
Built-in variables for Chatflow's start node include: sys.query, sys.files, sys.conversation_id, and sys.user_id. Workflow's start node includes sys.files and sys.user_id.
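As an illustration, an LLM node's prompt can interpolate these built-ins with template placeholders; the exact reference syntax below is illustrative:

```python
# How a Chatflow LLM node might weave built-in variables into its prompt.
prompt_template = (
    "You are a support assistant.\n"
    "Conversation: {{sys.conversation_id}} | User: {{sys.user_id}}\n"
    "Question: {{sys.query}}"
)
```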
Aera is a full-featured platform for building LLM applications, combining orchestration, retrieval, and agentic capabilities in a developer-friendly environment.
Model Support – Integrates with major LLM providers like OpenAI and Anthropic, plus supports local inference runtimes (e.g. Ollama, Xorbits) and OpenAI-compatible APIs.
Visual Orchestration – Drag-and-drop builder for prompts and workflows, including logic, HTTP requests, tools, and LLM steps, all editable in real time.
Flexible App Types – Create agents, chatbots, chatflows and text generators using built-in templates and orchestration modes.
Knowledge Retrieval (RAG) – Visual interface for managing indexed content, supporting hybrid search, recall tuning, and LLM-assisted relevance.
Agent Framework – Includes tool-based execution (ReAct, OpenAPI tools), over 40 built-in tools, and native code support for advanced logic.
Data & Integration – Built-in ETL for documents and Notion/web syncing; robust REST API access; team-based collaboration with moderation options.
Follow the steps below to set up the Aera platform on your local machine.
The backend requires some middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using docker-compose:

```bash
cd ../docker
cp middleware.env.example middleware.env
# Change the profile to another vector database if you're not using Weaviate
docker compose -f docker-compose.middleware.yaml --profile weaviate -p aera up -d
```

Copy .env.example to .env, then generate a random SECRET_KEY in the .env file:

```bash
cd ../api
cp .env.example .env
```

For Linux:

```bash
sed -i "/^SECRET_KEY=/c\SECRET_KEY=$(openssl rand -base64 42)" .env
```

For Mac:

```bash
secret_key=$(openssl rand -base64 42)
sed -i '' "/^SECRET_KEY=/c\\
SECRET_KEY=${secret_key}" .env
```

The Aera API service uses Poetry to manage dependencies. First, add the Poetry shell plugin, if you don't have it already, in order to run in a virtual environment. (Note: poetry shell is no longer a native command, so you need to install the Poetry plugin beforehand.)

```bash
poetry self add poetry-plugin-shell
poetry env use 3.12
poetry install
```

Then, you can execute poetry shell to activate the environment.

Before the first launch, migrate the database to the latest version:

```bash
poetry run python -m flask db upgrade
```

Start the API server:

```bash
poetry run python -m flask run --host 0.0.0.0 --port=5001 --debug
```

Follow the instructions for setting up the frontend (see below) to start the web service.
Once the web service is running, visit http://localhost:3000 to begin setting up your application.
If you need to handle and debug async tasks (such as dataset importing or document indexing), start the worker service:

```bash
poetry run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion
```

To run the backend tests:

```bash
poetry install -C api --with dev
poetry run -P api bash dev/pytest/pytest_all_tests.sh
```

Before starting the web frontend service, ensure the following environment is ready:
Node.js >= v22.11.x
pnpm v10.x

First, install the dependencies:

```bash
pnpm install
```

Then, configure the environment variables. Create a file named .env.local in the current directory and copy the contents from .env.example:

```bash
cp .env.example .env.local
```

Modify the values of these environment variables according to your requirements:

```bash
# The deployment environment; for a production release, change this to PRODUCTION
NEXT_PUBLIC_DEPLOY_ENV=DEVELOPMENT
# The deployment edition
NEXT_PUBLIC_EDITION=SELF_HOSTED
# The base URL of the console application; set this if the console domain differs from the API or web app domain
NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api
# The URL for the web app
NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api
# SENTRY configuration
NEXT_PUBLIC_SENTRY_DSN=
```

Finally, run the development server:

```bash
pnpm run dev
```

Visit http://localhost:3000 in your browser to see the result.

First, build the app for production:

```bash
pnpm run build
```

Then, start the server:

```bash
pnpm run start
```

If you want to customise the host and port:

```bash
pnpm run start --port=3001 --host=0.0.0.0
```

If you want to customise the number of instances launched by PM2, you can configure PM2_INSTANCES in the docker-compose.yaml or Dockerfile.

Aera supports the following providers natively. You may need to install extensions to use some of these providers.
Natively supported providers: OpenAI (Function Calling, Vision Support), Anthropic (Function Calling), Azure OpenAI (Function Calling, Vision Support), Gemini, Google Cloud (Vision Support), Nvidia API Catalog, Nvidia NIM, Nvidia Triton Inference Server, AWS Bedrock, OpenRouter, Cohere, together.ai, Ollama, Mistral AI, groqcloud, Replicate, Hugging Face, Xorbits Inference, Zhipu AI (Function Calling, Vision Support), Baichuan, Spark, Minimax (Function Calling), Tongyi, Wenxin, Moonshot AI (Function Calling), Tencent Cloud, Stepfun (Function Calling, Vision Support), VolcanoEngine, 01.AI, 360 Zhinao, Azure AI Studio, deepseek (Function Calling), Tencent Hunyuan, SILICONFLOW, Jina AI, ChatGLM, Xinference (Function Calling, Vision Support), OpenLLM, LocalAI, OpenAI API-Compatible, PerfXCloud, Lepton AI, novita.ai, Amazon Sagemaker, Text Embedding Inference, and GPUStack (Function Calling, Vision Support).
Each provider supports a subset of the model types described above (system inference, embedding, rerank, and speech-to-text).