
Supported Model Providers

Aera supports the following model providers natively. You may need to install an extension to use some of these providers.

| Provider | LLM | Text Embedding | Rerank | Speech to text | TTS |
| --- | --- | --- | --- | --- | --- |
| OpenAI | Yes (Function Calling, Vision Support) | Yes | | Yes | Yes |
| Anthropic | Yes (Function Calling) | | | | |
| Azure OpenAI | Yes (Function Calling, Vision Support) | Yes | | Yes | Yes |
| Gemini | Yes | | | | |
| Google Cloud | Yes (Vision Support) | Yes | | | |
| Nvidia API Catalog | Yes | Yes | Yes | | |
| Nvidia NIM | Yes | | | | |
| Nvidia Triton Inference Server | Yes | | | | |
| AWS Bedrock | Yes | Yes | | | |
| OpenRouter | Yes | | | | |
| Cohere | Yes | Yes | Yes | | |
| together.ai | Yes | | | | |
| Ollama | Yes | Yes | | | |
| Mistral AI | Yes | | | | |
| groqcloud | Yes | | | | |
| Replicate | Yes | Yes | | | |
| Hugging Face | Yes | Yes | | | |
| Xorbits Inference | Yes | Yes | Yes | Yes | Yes |
| Zhipu AI | Yes (Function Calling, Vision Support) | Yes | | | |
| Baichuan | Yes | Yes | | | |
| Spark | Yes | | | | |
| Minimax | Yes (Function Calling) | Yes | | | |
| Tongyi | Yes | Yes | | | Yes |
| Wenxin | Yes | Yes | | | |
| Moonshot AI | Yes (Function Calling) | | | | |
| Tencent Cloud | | | | Yes | |
| Stepfun | Yes (Function Calling, Vision Support) | | | | |
| VolcanoEngine | Yes | Yes | | | |
| 01.AI | Yes | | | | |
| 360 Zhinao | Yes | | | | |
| Azure AI Studio | Yes | | Yes | | |
| deepseek | Yes (Function Calling) | | | | |
| Tencent Hunyuan | Yes | | | | |
| SILICONFLOW | Yes | | Yes | | |
| Jina AI | | Yes | Yes | | |
| ChatGLM | Yes | | | | |
| Xinference | Yes (Function Calling, Vision Support) | Yes | Yes | | |
| OpenLLM | Yes | Yes | | | |
| LocalAI | Yes | Yes | Yes | Yes | |
| OpenAI API-Compatible | Yes | Yes | | Yes | |
| PerfXCloud | Yes | Yes | | | |
| Lepton AI | Yes | | | | |
| novita.ai | Yes | | | | |
| Amazon Sagemaker | Yes | Yes | Yes | | |
| Text Embedding Inference | | Yes | Yes | | |
| GPUStack | Yes (Function Calling, Vision Support) | Yes | Yes | Yes | Yes |

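Several entries above (Ollama, LocalAI, Xinference, and the generic "OpenAI API-Compatible" provider) expose an OpenAI-style API, so any endpoint that speaks the OpenAI chat-completions protocol can be plugged in when you add a model provider in Aera. As a rough illustration of what that compatibility means, the sketch below calls a hypothetical local Ollama server through the official `openai` Python client; the `base_url`, API key, and model name are placeholders for your own deployment, not values taken from this page.

```python
from openai import OpenAI

# Minimal sketch, assuming a local OpenAI-compatible server (e.g. Ollama)
# listening on its default port. The base_url, api_key, and model name are
# placeholders; most local servers ignore the API key entirely.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="not-needed-for-local-servers",
)

response = client.chat.completions.create(
    model="llama3",  # any model name exposed by your local server
    messages=[{"role": "user", "content": "Reply with one short sentence."}],
)
print(response.choices[0].message.content)
```

The same base URL and model name are what you would enter when adding an OpenAI API-compatible provider in Aera's settings, rather than in code.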