# OpenAI

OpenAI serves as the primary LLM provider in Lillo, implemented through the OpenAI Node.js SDK, with support for GPT models and function calling.

## Implementation

### Provider Setup
```typescript
import OpenAI from 'openai';

class OpenAIProvider implements LLMProvider {
  private client: OpenAI;

  constructor() {
    this.client = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
  }
}
```
### Message Generation

```typescript
async generateResponse(
  messages: Array<{ role: string; content: string }>
): Promise<LLMResponse> {
  const formattedMessages = messages.map(msg => ({
    role: msg.role as 'user' | 'assistant' | 'system',
    content: msg.content
  }));
  // Implementation details
}
```
## Function Calling

### Available Functions

```typescript
const functions = {
  generate_image: {
    name: "generate_image",
    description: "Generate an image based on the prompt",
    parameters: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The description of the image to generate"
        }
      },
      required: ["prompt"]
    }
  },
  get_weather: {
    name: "get_weather",
    description: "Get current weather data",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "object",
          description: "Location details",
          properties: {
            name: { type: "string" },
            country: { type: "string" },
            coordinates: {
              type: "object",
              properties: {
                lat: { type: "number" },
                lon: { type: "number" }
              }
            }
          }
        }
      }
    }
  }
}
```
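When the model chooses one of these functions, the response contains the function name and its arguments serialized as a JSON string, which the application must parse and route to a local handler. A minimal dispatcher sketch follows; the handler bodies and return strings are placeholders for illustration, not Lillo's actual handlers:

```typescript
// Hypothetical dispatcher: route a function call returned by the model to a
// registered local handler. The OpenAI API delivers arguments as a JSON
// string, so the dispatcher parses them before invoking the handler.
type FunctionHandler = (args: Record<string, unknown>) => Promise<string> | string;

const handlers: Record<string, FunctionHandler> = {
  generate_image: args => `image for: ${String(args.prompt)}`,
  get_weather: args => `weather for: ${JSON.stringify(args.location)}`,
};

async function dispatchFunctionCall(
  name: string,
  rawArguments: string
): Promise<string> {
  const handler = handlers[name];
  if (!handler) {
    throw new Error(`No handler registered for function: ${name}`);
  }
  return handler(JSON.parse(rawArguments));
}
```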
## Error Handling

### Common Errors

- Rate limiting
- Content policy violations
- Token limits
- Invalid requests
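Of these, rate limiting is transient and worth retrying with exponential backoff rather than surfacing immediately. A hedged sketch: the check for a `status` field of 429 matches the shape of the OpenAI Node SDK's `APIError`, but the `withRetries` helper itself is illustrative, not part of Lillo:

```typescript
// Retry a request on HTTP 429 (rate limit) with exponential backoff,
// rethrowing any other error or the final 429 once attempts are exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status;
      if (status !== 429 || attempt >= maxAttempts) throw err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```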
### Error Response Format

```typescript
interface ErrorResponse {
  error: true;
  content: string;
  type?: string;
  retryAfter?: number;
}
```
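A caught provider error can be normalized into this shape before it reaches the caller. In the sketch below, the `status` and `headers` fields mirror the OpenAI SDK's `APIError`; the classification strings (`rate_limit`, `invalid_request`, `unknown`) are illustrative assumptions:

```typescript
interface ErrorResponse {
  error: true;
  content: string;
  type?: string;
  retryAfter?: number;
}

// Map a caught API error to the ErrorResponse shape, masking internal
// details behind short, user-safe messages.
function toErrorResponse(err: {
  status?: number;
  message?: string;
  headers?: Record<string, string>;
}): ErrorResponse {
  if (err.status === 429) {
    // The Retry-After header, when present, tells clients how long to wait.
    const retryAfter = Number(err.headers?.['retry-after'] ?? 0) || undefined;
    return { error: true, content: 'Rate limit exceeded', type: 'rate_limit', retryAfter };
  }
  if (err.status === 400) {
    return { error: true, content: err.message ?? 'Invalid request', type: 'invalid_request' };
  }
  return { error: true, content: 'Unexpected provider error', type: 'unknown' };
}
```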
## Best Practices

### Configuration

- Use environment variables for API keys
- Configure request timeouts
- Set appropriate temperature
- Manage token limits
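These settings can be gathered in one place at startup so a missing key fails fast. A minimal sketch, assuming environment variable names other than `OPENAI_API_KEY` (which appears in the provider setup above) are hypothetical rather than documented Lillo settings:

```typescript
// Load provider configuration from environment variables with fallbacks,
// failing immediately if the API key is absent.
interface ProviderConfig {
  apiKey: string;
  timeoutMs: number;
  temperature: number;
  maxTokens: number;
}

function loadConfig(
  env: Record<string, string | undefined> = process.env
): ProviderConfig {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error('OPENAI_API_KEY is not set');
  }
  return {
    apiKey,
    timeoutMs: Number(env.OPENAI_TIMEOUT_MS ?? 30_000),
    temperature: Number(env.OPENAI_TEMPERATURE ?? 0.7),
    maxTokens: Number(env.OPENAI_MAX_TOKENS ?? 1024),
  };
}
```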
### Message Handling

- Validate message roles
- Clean input content
- Format system prompts
- Handle streaming properly
### Security

- Secure API key storage
- Input validation
- Response sanitization
- Error masking
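One concrete piece of error masking is keeping the API key out of log output. A small illustrative helper (not part of Lillo) that preserves a short prefix for debugging while hiding the rest:

```typescript
// Mask a secret for logging: keep the first few characters visible so the
// key can still be identified, and replace the remainder with asterisks.
function maskSecret(secret: string, visible = 6): string {
  if (secret.length <= visible) return '*'.repeat(secret.length);
  return secret.slice(0, visible) + '*'.repeat(secret.length - visible);
}
```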