# LLM System
The LLM System provides a unified, standardized API for interacting with multiple language model providers. Its modular design keeps behavior consistent across provider implementations.
## Core Components
- **Base Provider Interface** - Abstract base class for all LLM providers
- **Model Metadata** - Information about supported models
- **Usage Tracking** - Cost and token tracking
- **Retry Mechanism** - Automatic retry and fallback logic
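A base provider interface typically pins down the methods every provider must implement. The sketch below is illustrative only: `BaseLlmProvider`, `complete`, and `EchoProvider` are hypothetical names, not the actual classes in `aicore.llm`.

```python
from abc import ABC, abstractmethod


class BaseLlmProvider(ABC):
    """Hypothetical abstract base class; a stand-in for the real interface."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for the given prompt."""


class EchoProvider(BaseLlmProvider):
    """Toy provider implementing the abstract interface for demonstration."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"
```

Because `complete` is abstract, instantiating `BaseLlmProvider` directly raises a `TypeError`; concrete providers such as the toy `EchoProvider` satisfy the contract.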
## Key Features
- **Multi-provider Support**: A single interface for OpenAI, Anthropic, Gemini, and more
- **Operation Modes**: Both synchronous and asynchronous operation
- **Advanced Capabilities**:
  - Streaming responses
  - Reasoning augmentation
  - Template-based prompting
- **Observability**: Built-in usage tracking and monitoring
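Asynchronous streaming usually means consuming tokens from an async iterator as they arrive. This is a minimal sketch of that pattern using a stand-in generator; `stream_tokens` is hypothetical and not aicore's actual streaming API.

```python
import asyncio


async def stream_tokens(prompt: str):
    # Stand-in for a provider's streaming endpoint: yields tokens one by one.
    for token in ["Hello", ",", " world"]:
        await asyncio.sleep(0)  # simulate waiting on the network
        yield token


async def collect(prompt: str) -> str:
    # Consume the stream incrementally, as a UI or logger would.
    chunks = []
    async for token in stream_tokens(prompt):
        chunks.append(token)
    return "".join(chunks)


result = asyncio.run(collect("hi"))
```

The same `async for` loop shape applies whether tokens come from a toy generator, as here, or a real provider stream.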
## Getting Started
```python
from aicore.llm import Llm
from aicore.config import load_config

# Load configuration
config = load_config("config.yml")

# Initialize the LLM client
llm = Llm.from_config(config)

# Make a completion request
response = llm.complete("Hello, how are you?")
```
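Retry-with-fallback logic generally means retrying transient failures a few times per provider before moving to the next one. The sketch below illustrates that idea in plain Python; `call_with_retry` and the `flaky` provider are hypothetical, not the system's actual retry implementation.

```python
def call_with_retry(providers, prompt, max_attempts=3):
    """Try each provider up to max_attempts times, falling back in order."""
    last_error = None
    for provider in providers:          # fallback across providers
        for _ in range(max_attempts):   # retries within one provider
            try:
                return provider(prompt)
            except RuntimeError as exc:  # treat as a transient failure
                last_error = exc         # (a real system would back off here)
    raise last_error


calls = {"n": 0}


def flaky(prompt):
    # Toy provider that fails twice, then succeeds on the third attempt.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return f"ok: {prompt}"


result = call_with_retry([flaky], "hi")
```

A production version would add exponential backoff between attempts and distinguish retryable errors (rate limits, timeouts) from permanent ones.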
## Next Steps
- Explore Provider Implementations for specific provider details
- Learn about Configuration Options for fine-tuning behavior
- Check Examples for practical implementation patterns