Adding a Model Provider
To begin using Aera, you’ll first need to configure a model provider. This enables your applications to interact with large language models (LLMs) for tasks such as chat, embedding, or speech-to-text.
1. Navigate to Settings -> Model Providers in the Aera interface.
2. Click Add Provider and select your desired model provider from the list.
3. Enter the required API key or access credentials. You can obtain these from your chosen provider's official dashboard.
4. Save your configuration.
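The same registration can be expressed programmatically. The sketch below only builds the request payload; the field names (`provider`, `credentials`, `api_key`) are illustrative assumptions, not Aera's documented schema:

```python
import json


def build_provider_config(provider: str, api_key: str) -> str:
    """Build a JSON payload for registering a model provider.

    NOTE: the field names here are illustrative assumptions,
    not Aera's documented configuration schema.
    """
    payload = {
        "provider": provider,                  # e.g. "openai", "anthropic"
        "credentials": {"api_key": api_key},   # key from the provider's dashboard
    }
    return json.dumps(payload)


# Example: payload for an OpenAI provider. In practice, load the key
# from a secret store rather than hard-coding it.
print(build_provider_config("openai", "sk-..."))
```

In a real deployment you would load the API key from an environment variable or secret manager, never commit it to source control.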
Once added, your selected models will be available for use in applications across Aera.
Aera supports a variety of model types depending on your application needs:
System Inference Models: handle general tasks such as chat, text generation, and summarisation. Supported: OpenAI, Anthropic, Hugging Face, Replicate, Xinference, Ollama, LocalAI, and more.
Embedding Models: used for indexing knowledge bases and understanding user input. Supported: OpenAI, ZHIPU, Jina AI.
Rerank Models: improve search relevance by reordering retrieved documents. Supported: Cohere, Jina AI.
Speech-to-Text Models: convert audio input into text, useful in voice interfaces. Supported: OpenAI.
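The task-to-model-type mapping above can be sketched as a simple lookup. The task and type names below are illustrative, not identifiers from Aera's API:

```python
# Illustrative mapping from an application task to the model type it needs,
# following the four categories described above.
MODEL_TYPE_FOR_TASK = {
    "chat": "system_inference",
    "text_generation": "system_inference",
    "summarisation": "system_inference",
    "knowledge_indexing": "embedding",
    "search_reranking": "rerank",
    "voice_input": "speech_to_text",
}


def required_model_type(task: str) -> str:
    """Return the model type a task requires, or raise for unknown tasks."""
    try:
        return MODEL_TYPE_FOR_TASK[task]
    except KeyError:
        raise ValueError(f"no model type registered for task: {task!r}")
```

A lookup like this makes it easy to validate, before building an application, that a provider covering the needed model type has actually been configured.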
Model providers in Aera fall into two categories:
Full-suite providers: platforms such as OpenAI and Anthropic that offer a complete range of models. Add the provider by entering your API key in Aera; once integrated, you can access every model that provider offers.
Hosted models: individual models served by third-party platforms such as Hugging Face or Replicate. Integrate these one at a time depending on your use case; some providers may require additional configuration.
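For hosted models, each model is registered individually rather than as a whole suite. A minimal sketch of such a per-model payload, where every field name and the idea of an `endpoint_url` for extra configuration are assumptions rather than Aera's documented schema:

```python
import json


def build_hosted_model_config(provider: str, model_name: str,
                              model_type: str, endpoint_url: str) -> str:
    """Build a JSON payload for registering a single hosted model.

    NOTE: all field names are illustrative assumptions, not Aera's
    documented schema.
    """
    payload = {
        "provider": provider,          # e.g. "huggingface", "replicate"
        "model": model_name,           # the specific model being added
        "model_type": model_type,      # e.g. "system_inference", "embedding"
        "endpoint_url": endpoint_url,  # extra configuration some hosts need
    }
    return json.dumps(payload)


# Example: registering one hypothetical Hugging Face-hosted model.
print(build_hosted_model_config(
    "huggingface", "my-org/my-model", "system_inference",
    "https://example.invalid/inference",
))
```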
Once added, your models are ready to be used in any application you create within Aera.