Provider System
The Provider System integrates multiple LLM and embedding providers behind a standardized interface, so switching between AI services requires only a configuration change.
Supported Providers
AiCore currently supports the following providers:
- OpenAI - GPT models and embeddings
- Anthropic - Claude models
- Gemini - Google's Gemini models
- Groq - Ultra-fast inference
- Mistral - Open-weight models
- NVIDIA - NVIDIA AI Foundation models
- OpenRouter - Unified API for multiple providers
Key Features
- Standardized Interface: Consistent API across all providers
- Automatic Retry: Built-in error handling and retry mechanisms
- Dynamic Pricing: Real-time cost calculation per request
- Async Support: Native asynchronous operations
- Configuration Flexibility: Provider-specific settings through YAML
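To illustrate the YAML-based configuration flexibility, here is a sketch of what a provider block might look like. The key names and layout below are illustrative assumptions, not AiCore's actual schema; consult each provider's page for the real settings.

```yaml
# Hypothetical config sketch -- key names are illustrative, not AiCore's schema.
provider:
  name: openai            # which provider backend to use
  model: gpt-4o-mini      # provider-specific model identifier
  api_key: ${OPENAI_API_KEY}  # read from the environment, never hard-coded
  max_retries: 3          # automatic retry on transient errors
  temperature: 0.7        # generation settings pass through unchanged
```

Because the interface is standardized, swapping `name` (and the model/key) is typically all that is needed to move between services.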
Getting Started
To use a provider:
1. Configure your provider in the config file
2. Import the desired provider class
3. Initialize it with your configuration
4. Make requests through the standardized interface
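The steps above hinge on every provider exposing the same interface. The sketch below shows the pattern in minimal form; the class names, the `complete()` method, and the registry are illustrative stand-ins, not AiCore's actual API.

```python
# Minimal sketch of the standardized-interface pattern: every provider
# implements the same method, so callers switch services by changing
# configuration alone. Names here are illustrative, not AiCore's API.
from abc import ABC, abstractmethod


class Provider(ABC):
    """Common interface that every provider backend implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion for the given prompt."""


class OpenAIProvider(Provider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[openai] reply to: {prompt}"


class AnthropicProvider(Provider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"[anthropic] reply to: {prompt}"


# Registry mapping config names to provider classes.
PROVIDERS = {"openai": OpenAIProvider, "anthropic": AnthropicProvider}


def make_provider(name: str) -> Provider:
    """Instantiate a provider from its configured name."""
    return PROVIDERS[name]()


if __name__ == "__main__":
    # Switching services is a one-word change in configuration.
    provider = make_provider("anthropic")
    print(provider.complete("hello"))
```

The same dispatch-by-name idea is what lets a YAML `provider.name` field select a backend without touching calling code.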
For detailed instructions on each provider, select from the list above.