# Making Your First Request
This guide walks you through making your first LLM request using AiCore.
## Prerequisites
- Completed installation
- Valid API key for your chosen provider
- Basic Python knowledge
## Basic Usage
- First, import the required modules:
```python
from aicore.llm import Llm
from aicore.llm.config import LlmConfig
```
- Configure your LLM provider (example using OpenAI):
```python
config = LlmConfig(
    provider="openai",
    api_key="your_api_key_here",
    model="gpt-4o"
)
```
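Hardcoding keys in source files is risky, so a common alternative is to read the key from an environment variable. A minimal sketch, assuming you have exported `OPENAI_API_KEY` (that variable name is a convention, not something AiCore requires):

```python
import os

# Read the key from the environment, falling back to a placeholder.
# OPENAI_API_KEY is a conventional name, not an AiCore requirement.
api_key = os.environ.get("OPENAI_API_KEY", "your_api_key_here")
```

Pass the resulting `api_key` to `LlmConfig` exactly as in the example above.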
- Initialize the LLM client:
```python
llm = Llm(config=config)
```
- Make your first request:
```python
response = llm.complete("Hello world!")
print(response)
```
## Async Usage
For asynchronous applications:
```python
import asyncio

from aicore.llm import Llm
from aicore.llm.config import LlmConfig

async def main():
    config = LlmConfig(
        provider="openai",
        api_key="your_api_key_here",
        model="gpt-4o"
    )
    llm = Llm(config=config)
    response = await llm.acomplete("Hello async world!")
    print(response)

asyncio.run(main())
```
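The main payoff of the async API is issuing several completions concurrently. A hedged sketch using `asyncio.gather`; the `fake_acomplete` coroutine below is a hypothetical stand-in for `llm.acomplete` so the example runs without network access:

```python
import asyncio

async def fake_acomplete(prompt: str) -> str:
    # Stand-in for llm.acomplete: simulate I/O latency, echo the prompt.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def main() -> list[str]:
    prompts = ["Hello", "Bonjour", "Hola"]
    # gather awaits all three "requests" concurrently and preserves order.
    return await asyncio.gather(*(fake_acomplete(p) for p in prompts))

results = asyncio.run(main())
print(results)
```

With a real client you would swap `fake_acomplete` for `llm.acomplete`; the total wait time is roughly that of the slowest request rather than the sum of all of them.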
## Configuration Options
You can customize your requests with:

- `temperature`: Controls randomness (0-1)
- `max_tokens`: Limits response length
- `stream`: Enables streaming responses
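To build intuition for `temperature`: samplers typically divide the model's logits by the temperature before applying softmax, so low values sharpen the distribution (more deterministic) and values near 1 leave it flatter (more varied). A self-contained illustration of that effect, not AiCore code:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide each logit by the temperature, then normalize with softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # sharply peaked
hot = softmax_with_temperature(logits, 1.0)   # closer to uniform
print(cold, hot)
```

The top candidate's probability is much larger at temperature 0.2 than at 1.0, which is why low temperatures yield repeatable answers and higher ones yield more varied text.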
Example with custom settings:
```python
response = llm.complete(
    "Explain quantum computing",
    temperature=0.7,
    max_tokens=500,
    stream=True
)
```
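With `stream=True`, output typically arrives as incremental text chunks rather than one finished string. The exact object AiCore returns for a streamed completion is not shown here, so the sketch below uses a hypothetical stand-in generator; the chunk-handling pattern is what carries over:

```python
def fake_stream():
    # Hypothetical stand-in for a streamed completion: yields text chunks.
    yield from ["Quantum ", "computing ", "uses ", "qubits."]

pieces = []
for chunk in fake_stream():
    pieces.append(chunk)              # accumulate for later use
    print(chunk, end="", flush=True)  # render each chunk as it arrives
print()

full_text = "".join(pieces)
```

Printing with `end=""` and `flush=True` makes the text appear token by token, while joining the accumulated pieces recovers the complete response.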
## Next Steps
- Learn about advanced configuration
- Explore provider-specific features
- Check out usage tracking for cost monitoring
- Try the examples for more complex scenarios