Overview
Lillo's chat system provides a robust, context-aware conversation engine powered by GPT-4o-mini. The system supports both private chats and group interactions, with intelligent message handling and response generation. It features multi-agent support, allowing different AI personalities to coexist and interact within the same chat environment.
Core Components
1. Message Handling
Message Reception: Processes incoming messages through webhook handlers
Response Triggers:
Direct mentions (@botname)
Private messages
Replies to bot messages
Command prefixes (/)
Multi-Agent Support:
Agent-specific configurations
Independent chat histories
Custom personalities
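A minimal sketch of the response-trigger check inside a webhook handler, assuming a TypeScript codebase; the message and agent field names below are illustrative, not the actual types.

```typescript
// Sketch only: these field names are assumptions, not the real message/agent types.
interface IncomingMessage {
  chatType: "private" | "group";
  text: string;
  replyToBotMessage: boolean; // true when the user replied to one of the bot's messages
}

interface AgentConfig {
  username: string;      // e.g. "@botname"; each agent has its own handle
  commandPrefix: string;  // usually "/"
}

// Decide whether a given agent should respond to this message.
function shouldRespond(msg: IncomingMessage, agent: AgentConfig): boolean {
  if (msg.chatType === "private") return true;               // private messages always get a reply
  if (msg.replyToBotMessage) return true;                    // replies to the bot's messages
  if (msg.text.includes(agent.username)) return true;        // direct mention of this agent
  if (msg.text.startsWith(agent.commandPrefix)) return true; // command prefix
  return false;
}
```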
2. Context Management
Chat History: Maintains conversation context using PostgreSQL
Message Threading: Supports reply chains and conversation flow
State Persistence:
Tracks user and chat states
Maintains agent preferences
Caches recent interactions
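A sketch of agent-aware history retrieval from PostgreSQL using node-postgres; the chat_messages table and its columns are assumptions based on the description above.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Hypothetical table and column names; the real schema may differ.
type HistoryRow = { role: "user" | "assistant"; content: string; created_at: Date };

// Fetch the most recent messages for one chat/agent pair and return them oldest-first,
// ready to be passed to the model as conversation context.
async function getChatHistory(chatId: string, agentId: string, limit = 20): Promise<HistoryRow[]> {
  const { rows } = await pool.query<HistoryRow>(
    `SELECT role, content, created_at
       FROM chat_messages
      WHERE chat_id = $1 AND agent_id = $2
      ORDER BY created_at DESC
      LIMIT $3`,
    [chatId, agentId, limit]
  );
  return rows.reverse(); // chronological order for the prompt
}
```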
3. Response Generation
AI Integration:
Uses GPT-4o-mini for natural language processing
Supports model preference per chat/agent
Dynamic system prompt generation
Function Calling: Supports tool integration for:
Image generation
Weather data
Market data (Gecko/Solana)
Time information
Message Formatting:
Handles mentions
Supports emojis
HTML formatting
Special character escaping
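A hedged sketch of the response-generation call, using the OpenAI chat completions API with a single illustrative tool; the real system prompt construction, tool set, and parameter schemas are not shown in this document.

```typescript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Illustrative tool only; the actual tools (image generation, weather, market data, time)
// and their parameter schemas may differ.
const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Fetch current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
];

async function generateReply(
  systemPrompt: string,
  history: OpenAI.Chat.Completions.ChatCompletionMessageParam[],
  model = "gpt-4o-mini" // per-chat/agent preference can override this default
) {
  const completion = await openai.chat.completions.create({
    model,
    messages: [{ role: "system", content: systemPrompt }, ...history],
    tools,
  });
  const message = completion.choices[0].message;
  // If the model requested a tool, the caller runs it and feeds the result back in a
  // follow-up completion; otherwise the text goes on to formatting and sending.
  return message.tool_calls?.length
    ? { toolCalls: message.tool_calls }
    : { text: message.content ?? "" };
}
```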
Database Schema
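A hypothetical sketch of the tables implied by the features above (per-agent chat history, reply threading, and per-chat model preferences); names and columns are assumptions, not the production schema.

```typescript
// Hypothetical migration; run once at startup, e.g. await pool.query(createTables).
export const createTables = `
  CREATE TABLE IF NOT EXISTS chat_messages (
    id          BIGSERIAL PRIMARY KEY,
    chat_id     TEXT NOT NULL,
    agent_id    TEXT NOT NULL,          -- which AI personality sent or received the message
    role        TEXT NOT NULL,          -- 'user' or 'assistant'
    content     TEXT NOT NULL,
    reply_to_id BIGINT,                 -- supports reply chains / threading
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
  );

  CREATE TABLE IF NOT EXISTS chat_preferences (
    chat_id     TEXT NOT NULL,
    agent_id    TEXT NOT NULL,
    model       TEXT NOT NULL DEFAULT 'gpt-4o-mini',
    PRIMARY KEY (chat_id, agent_id)
  );
`;
```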
Message Flow
Reception:
Incoming update received through the webhook handler
Trigger check (mention, reply, command, or private chat)
Processing:
Message cleaning and validation
Context retrieval (agent-specific)
System prompt generation
Model preference handling
Function call detection
Response:
AI response generation
Function call handling
Message formatting
Reply sending
History update
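The flow above can be read as a single pipeline. The sketch below wires the steps together with stub helpers; every name is a hypothetical stand-in, not a real function from the codebase.

```typescript
// Stand-in types and trivial stubs so the sketch is self-contained; the real steps
// live in their own modules.
type ChatMessage = { role: "user" | "assistant"; content: string };
type Incoming = { chatId: string; messageId: number; agentId: string; text: string };

const cleanAndValidate = (u: Incoming): Incoming | null => (u.text.trim() ? u : null);
const getChatHistory = async (_chat: string, _agent: string): Promise<ChatMessage[]> => [];
const buildSystemPrompt = (agentId: string): string => `You are agent ${agentId}.`;
const getModelPreference = async (_chat: string, _agent: string): Promise<string> => "gpt-4o-mini";
const generateReply = async (_prompt: string, _history: ChatMessage[], _model: string): Promise<string> => "(model reply)";
const formatForChat = (text: string): string => text;
const sendReply = async (_chat: string, _html: string, _replyTo: number): Promise<void> => {};
const saveToHistory = async (_chat: string, _agent: string, _user: string, _bot: string): Promise<void> => {};

// One pass through the flow: Processing, then Response.
async function handleIncomingMessage(update: Incoming): Promise<void> {
  const msg = cleanAndValidate(update);                            // message cleaning and validation
  if (!msg) return;
  const history = await getChatHistory(msg.chatId, msg.agentId);   // agent-specific context retrieval
  const prompt = buildSystemPrompt(msg.agentId);                    // dynamic system prompt
  const model = await getModelPreference(msg.chatId, msg.agentId); // per-chat/agent model preference
  const text = await generateReply(prompt, history, model);        // AI response (function calls handled inside)
  await sendReply(msg.chatId, formatForChat(text), msg.messageId); // formatted, threaded reply
  await saveToHistory(msg.chatId, msg.agentId, msg.text, text);    // history update
}
```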
Special Features
1. Welcome Messages
Automatic greeting for new group members
Personalized welcome messages
Context-aware introductions
Agent-specific greetings
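A small sketch of building agent-specific greetings, assuming a Telegram-style update that lists newly joined members; the event shape and wording are illustrative.

```typescript
// Assumed event shape; the actual update type is not shown in this document.
interface NewMemberEvent {
  chatId: string;
  chatTitle: string;
  newMembers: { firstName: string; isBot: boolean }[];
}

// Build a personalized, agent-specific greeting for each new human member.
function buildWelcomeMessages(event: NewMemberEvent, agentName: string): string[] {
  return event.newMembers
    .filter((member) => !member.isBot)
    .map(
      (member) =>
        `👋 Welcome to ${event.chatTitle}, ${member.firstName}! ` +
        `I'm ${agentName}. Mention me or reply to my messages whenever you need something.`
    );
}
```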
2. Error Handling
Content policy violations
Rate limiting (per user/function)
Network errors
Invalid inputs
Database errors
User-friendly messages
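A sketch of mapping internal error categories to user-friendly replies; the error kinds mirror the list above, but the concrete types are assumptions.

```typescript
// Illustrative error categories only; the real error types are not shown in this document.
type BotError =
  | { kind: "content_policy" }
  | { kind: "rate_limited"; retryAfterSeconds: number }
  | { kind: "network" }
  | { kind: "invalid_input"; detail: string }
  | { kind: "database" };

// Translate internal failures into user-friendly replies without leaking internals.
function userFacingMessage(err: BotError): string {
  switch (err.kind) {
    case "content_policy":
      return "I can't help with that request.";
    case "rate_limited":
      return `You're sending requests too quickly. Try again in ${err.retryAfterSeconds}s.`;
    case "network":
      return "I had trouble reaching an external service. Please try again.";
    case "invalid_input":
      return `That input doesn't look right: ${err.detail}`;
    case "database":
      return "Something went wrong on my side. Please try again in a moment.";
  }
}
```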
3. Message Formatting
User mentions
Emoji support
HTML formatting
Reply threading
Special character escaping
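A minimal escaping helper for HTML-formatted replies, assuming a Telegram-style HTML parse mode where &, < and > must not appear unescaped; the formatting helper is illustrative.

```typescript
// Escape the characters that would otherwise be interpreted as HTML markup.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Example: bold a user mention while escaping the rest of the reply body.
function formatReply(userName: string, body: string): string {
  return `<b>@${escapeHtml(userName)}</b> ${escapeHtml(body)}`;
}
```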
4. Model Preferences
Per-chat model selection
Agent-specific defaults
Dynamic model switching
Preference persistence
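A sketch of preference persistence with a read-through default and an upsert for dynamic switching; the chat_preferences table matches the hypothetical schema sketched earlier, not a confirmed one.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Read the stored preference, falling back to the agent's default model.
async function getModelPreference(chatId: string, agentId: string, agentDefault = "gpt-4o-mini"): Promise<string> {
  const { rows } = await pool.query(
    "SELECT model FROM chat_preferences WHERE chat_id = $1 AND agent_id = $2",
    [chatId, agentId]
  );
  return rows[0]?.model ?? agentDefault;
}

// Dynamic switching: upsert the new preference so it persists across restarts.
async function setModelPreference(chatId: string, agentId: string, model: string): Promise<void> {
  await pool.query(
    `INSERT INTO chat_preferences (chat_id, agent_id, model)
     VALUES ($1, $2, $3)
     ON CONFLICT (chat_id, agent_id) DO UPDATE SET model = EXCLUDED.model`,
    [chatId, agentId, model]
  );
}
```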
Performance Optimization
Database Indexing:
Indexes on chat, agent, and timestamp columns to keep history lookups fast (see the sketch at the end of this section)
Caching:
Model preferences
Agent configurations
Recent chat history
Function call cooldowns
Rate Limiting:
Function call cooldowns
API request throttling
Message rate limits
Per-user quotas
Query Optimization:
Efficient history retrieval
Indexed lookups
Limited result sets
Chronological ordering
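A sketch of the indexing and cooldown ideas above: a composite index matching the history query pattern, plus a simple in-process cooldown map. Both are assumptions, not the actual definitions.

```typescript
// Hypothetical index supporting "most recent messages per chat/agent" lookups.
export const createHistoryIndex = `
  CREATE INDEX IF NOT EXISTS idx_chat_messages_lookup
    ON chat_messages (chat_id, agent_id, created_at DESC);
`;

// Simple in-process cooldown cache for function calls: one timestamp per user/function pair.
const cooldowns = new Map<string, number>();

function isOnCooldown(userId: string, fnName: string, cooldownMs: number): boolean {
  const key = `${userId}:${fnName}`;
  const last = cooldowns.get(key) ?? 0;
  if (Date.now() - last < cooldownMs) return true;
  cooldowns.set(key, Date.now());
  return false;
}
```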
Security
Input Validation:
Message content sanitization
Command parameter validation (sketched at the end of this section)
User permission checks
Type safety enforcement
Access Control:
Role-based permissions
Group membership validation
Command restrictions
Function call authorization
Error Messages:
User-friendly error responses
Security-aware error details
Rate limit notifications
Content policy feedback
Data Protection:
Secure token handling
Input sanitization
Output escaping
Error masking
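A sketch of command parameter validation combined with a role-based permission check, using a hypothetical model-switch command; the allow-list contents are placeholders.

```typescript
// Illustrative only: the real command set and permission model are not shown in this document.
interface CommandContext {
  userId: string;
  isGroupAdmin: boolean;
}

// Placeholder allow-list of selectable models.
const ALLOWED_MODELS = new Set(["gpt-4o-mini"]);

// Example: only group admins may switch the chat's model, and the argument must be allow-listed.
function validateModelCommand(
  ctx: CommandContext,
  arg: string
): { ok: true; model: string } | { ok: false; reason: string } {
  if (!ctx.isGroupAdmin) {
    return { ok: false, reason: "Only group admins can change the model." };
  }
  const model = arg.trim().toLowerCase();
  if (!ALLOWED_MODELS.has(model)) {
    return { ok: false, reason: "That model isn't available." };
  }
  return { ok: true, model };
}
```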