# Gemini

## Overview

Gemini integration is implemented with the Google Generative AI SDK (`@google/generative-ai`), using the `gemini-pro` model.
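The SDK named above is the `@google/generative-ai` npm package; a standard install step (not shown in the original text) would be:

```shell
npm install @google/generative-ai
```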
## Implementation

### Provider Setup

```typescript
import { GoogleGenerativeAI, GenerativeModel } from '@google/generative-ai';

class GeminiProvider implements LLMProvider {
  private model: GenerativeModel;

  constructor() {
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
    this.model = genAI.getGenerativeModel({ model: 'gemini-pro' });
  }
}
```

### Message Generation
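Gemini's chat API expects prior turns as history with roles `user` and `model`, while the final message is sent separately. That conversion, used by `generateResponse` below, can be sketched in isolation; `ChatMessage` and `toGeminiHistory` are hypothetical names for illustration:

```typescript
type ChatMessage = { role: string; content: string };

// Gemini accepts roles 'user' and 'model', so OpenAI-style 'assistant'
// turns are renamed, and the final message is excluded from history
// because it is delivered via sendMessage instead.
function toGeminiHistory(messages: ChatMessage[]) {
  return messages.slice(0, -1).map(msg => ({
    role: msg.role === 'assistant' ? 'model' : 'user',
    parts: [{ text: msg.content }],
  }));
}

const history = toGeminiHistory([
  { role: 'user', content: 'Hi' },
  { role: 'assistant', content: 'Hello! How can I help?' },
  { role: 'user', content: 'Summarize our chat.' },
]);
console.log(history.length);  // 2
console.log(history[1].role); // "model"
```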
```typescript
async generateResponse(
  messages: Array<{ role: string; content: string }>,
  cleanContent: string
): Promise<LLMResponse> {
  // Prior turns become chat history; the final message is sent separately.
  const chat = this.model.startChat({
    history: messages.slice(0, -1).map(msg => ({
      role: msg.role === 'assistant' ? 'model' : 'user',
      parts: [{ text: msg.content }],
    })),
  });

  const result = await chat.sendMessage(
    messages[messages.length - 1].content
  );
  const response = await result.response;

  return {
    content: response.text(),
    totalTokens: undefined, // not populated by this implementation
    toolCalls: undefined,   // not populated by this implementation
  };
}
```

## Features
### Supported Capabilities

### Limitations

## Error Handling

### Implementation

### Common Errors

## Best Practices

### Configuration
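As a minimal configuration sketch, assuming the only required setting is the `GEMINI_API_KEY` environment variable read in the provider constructor above:

```shell
# Must be set before the process starts; the provider reads it
# via process.env.GEMINI_API_KEY at construction time.
export GEMINI_API_KEY="your-api-key"
```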
### Message Handling

## Related Documentation