# Factory

## Architecture

### Core Interfaces

```typescript
interface LLMResponse {
  content: string;
  totalTokens?: number;
  toolCalls?: Array<{
    id: string;
    type: 'function';
    function: {
      name: string;
      arguments: string;
    }
  }>;
}

interface LLMProvider {
  generateResponse(
    messages: Array<{ role: string; content: string }>, 
    cleanContent: string
  ): Promise<LLMResponse>;
}

export type ModelType = 'OPENAI' | 'GEMINI' | 'GROK' | 'DEEPSEEK';
```

### Factory Implementation

```typescript
export class LLMFactory {
  private static instance: LLMFactory;
  private providers: Map<ModelType, LLMProvider>;
  private currentProvider: LLMProvider | null;

  private constructor() {
    this.providers = new Map();
    this.providers.set('OPENAI', new OpenAIProvider());
    this.providers.set('GEMINI', new GeminiProvider());
    this.providers.set('GROK', new GrokProvider());
    this.providers.set('DEEPSEEK', new DeepSeekProvider());
    this.currentProvider = this.providers.get('OPENAI') || null;
  }

  public static getInstance(): LLMFactory;
  public getAvailableModels(): ModelType[];
  public setModel(model: ModelType): boolean;
  public getCurrentModel(): ModelType | null;
  public getCurrentProvider(): LLMProvider | null;
  public generateResponse(
    messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse>;
}
```
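The signatures above can be fleshed out into a minimal, self-contained sketch of the singleton and delegation logic. The `StubProvider` class below stands in for the real SDK-backed providers, and `ModelType` is trimmed to two entries for brevity; everything else mirrors the declarations above.

```typescript
// Minimal sketch of the factory: singleton access, model switching, and
// delegation of generateResponse to the currently selected provider.
interface LLMResponse {
  content: string;
  totalTokens?: number;
}

interface LLMProvider {
  generateResponse(
    messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse>;
}

type ModelType = 'OPENAI' | 'GEMINI';

// Stand-in for the real SDK-backed providers.
class StubProvider implements LLMProvider {
  private name: string;
  constructor(name: string) {
    this.name = name;
  }
  async generateResponse(
    _messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse> {
    return { content: `${this.name}: ${cleanContent}` };
  }
}

class LLMFactory {
  private static instance: LLMFactory;
  private providers = new Map<ModelType, LLMProvider>();
  private currentModel: ModelType = 'OPENAI';

  private constructor() {
    this.providers.set('OPENAI', new StubProvider('openai'));
    this.providers.set('GEMINI', new StubProvider('gemini'));
  }

  static getInstance(): LLMFactory {
    // Lazily create the single shared instance.
    if (!LLMFactory.instance) LLMFactory.instance = new LLMFactory();
    return LLMFactory.instance;
  }

  getAvailableModels(): ModelType[] {
    return [...this.providers.keys()];
  }

  setModel(model: ModelType): boolean {
    if (!this.providers.has(model)) return false;
    this.currentModel = model;
    return true;
  }

  getCurrentModel(): ModelType {
    return this.currentModel;
  }

  generateResponse(
    messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse> {
    const provider = this.providers.get(this.currentModel);
    if (!provider) throw new Error(`No provider for ${this.currentModel}`);
    return provider.generateResponse(messages, cleanContent);
  }
}
```

The private constructor plus static `getInstance` guarantees one provider map per process, so model switches made anywhere are visible everywhere.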

## Providers

### OpenAI Provider

```typescript
class OpenAIProvider implements LLMProvider {
  private client: OpenAI;

  constructor() {
    this.client = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
  }

  async generateResponse(
    messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse>;
}
```

* Model: gpt-4o
* Full function calling support
* Token usage tracking
* Error handling and retries
* Standard OpenAI SDK integration

### Gemini Provider

```typescript
class GeminiProvider implements LLMProvider {
  private model: GenerativeModel;

  constructor() {
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
    this.model = genAI.getGenerativeModel({ model: 'gemini-pro' });
  }

  async generateResponse(
    messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse>;
}
```

* Model: gemini-pro
* Chat support without function calling
* Role-based message history (assistant → model)
* Custom error handling
* Google AI SDK integration
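The "assistant → model" role mapping noted above might be implemented with a small pure helper like the following. `mapRolesForGemini` is an illustrative name, and the assumption that system prompts are filtered out here (rather than passed through) is this sketch's, not necessarily the implementation's.

```typescript
// Gemini's chat API expects the roles 'user' and 'model', while OpenAI-style
// histories use 'user' and 'assistant'. This hypothetical helper converts an
// OpenAI-style history into Gemini's content format.
type ChatMessage = { role: string; content: string };
type GeminiContent = { role: 'user' | 'model'; parts: Array<{ text: string }> };

function mapRolesForGemini(messages: ChatMessage[]): GeminiContent[] {
  return messages
    .filter((m) => m.role !== 'system') // assume system prompts are handled separately
    .map((m): GeminiContent => ({
      role: m.role === 'assistant' ? 'model' : 'user',
      parts: [{ text: m.content }],
    }));
}
```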

### Grok Provider

```typescript
class GrokProvider implements LLMProvider {
  private client: OpenAI;

  constructor() {
    this.client = new OpenAI({
      apiKey: process.env.XAI_API_KEY,
      baseURL: 'https://api.x.ai/v1',
    });
  }

  async generateResponse(
    messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse>;
}
```

* Model: grok-2-latest
* OpenAI-compatible API
* Function calling support
* Response normalization
* Tool call format standardization
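The "tool call format standardization" bullet could be realized with a pure normalizer along these lines; `normalizeToolCalls` is a hypothetical helper name, and the fallback behaviors (synthesized ids, stringified arguments) are assumptions of this sketch.

```typescript
// Coerce raw tool calls from an OpenAI-compatible response into the toolCalls
// shape used by LLMResponse, stringifying arguments if they arrive as a
// parsed object rather than a JSON string.
type ToolCall = {
  id: string;
  type: 'function';
  function: { name: string; arguments: string };
};

function normalizeToolCalls(raw: unknown[]): ToolCall[] {
  return raw.map((entry, i): ToolCall => {
    const call = entry as {
      id?: string;
      function?: { name?: string; arguments?: unknown };
    };
    const args = call.function?.arguments;
    return {
      id: call.id ?? `call_${i}`, // synthesize a stable id if one is missing
      type: 'function',
      function: {
        name: call.function?.name ?? '',
        // Arguments must be a JSON string in the normalized format.
        arguments: typeof args === 'string' ? args : JSON.stringify(args ?? {}),
      },
    };
  });
}
```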

### DeepSeek Provider

```typescript
class DeepSeekProvider implements LLMProvider {
  async generateResponse(
    messages: Array<{ role: string; content: string }>,
    cleanContent: string
  ): Promise<LLMResponse>;
}
```

* Model: deepseek-chat
* Direct API integration
* Function calling support
* Custom message transformation
* Direct HTTP request handling
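The "direct HTTP request handling" bullet might look like the following request builder. The endpoint path follows DeepSeek's OpenAI-compatible API; `buildDeepSeekRequest` is an illustrative helper name, and taking the API key as a parameter (rather than reading the environment) is a choice of this sketch.

```typescript
// Build the URL and fetch init for a direct chat-completion request to
// DeepSeek's OpenAI-compatible HTTP API.
type ChatMessage = { role: string; content: string };

function buildDeepSeekRequest(
  apiKey: string,
  messages: ChatMessage[],
  tools?: object[],
) {
  return {
    url: 'https://api.deepseek.com/chat/completions',
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: 'deepseek-chat',
        messages,
        // Only include tools when function calling is requested.
        ...(tools ? { tools } : {}),
      }),
    },
  };
}
```

The caller would pass `url` and `init` straight to `fetch` and parse the JSON response into an `LLMResponse`.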

## Function Calling

The following tool definitions are passed to providers that support function calling (OpenAI, Grok, and DeepSeek):

### Core Tools

```typescript
const tools = [{
  type: "function",
  function: {
    name: "generate_image",
    description: "Generate an image based on the user's request",
    parameters: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The description of the image to generate"
        }
      },
      required: ["prompt"]
    }
  }
},
{
  type: "function",
  function: {
    name: "get_weather",
    description: "Get current weather data for any major city",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "object",
          properties: {
            name: { type: "string" },
            country: { type: "string" },
            coordinates: {
              type: "object",
              properties: {
                lat: { type: "number" },
                lon: { type: "number" }
              },
              required: ["lat", "lon"]
            }
          },
          required: ["name", "country", "coordinates"]
        },
        type: {
          type: "string",
          enum: ["current", "forecast"]
        }
      },
      required: ["location"]
    }
  }
},
{
  type: "function",
  function: {
    name: "get_market_data",
    description: "Get cryptocurrency market data",
    parameters: {
      type: "object",
      properties: {
        type: {
          type: "string",
          enum: ["token", "trending", "top", "latest", "boosted"]
        },
        query: {
          type: "string",
          description: "Token symbol or address"
        }
      },
      required: ["type"]
    }
  }
},
{
  type: "function",
  function: {
    name: "get_time",
    description: "Get the current time for a location",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "object",
          properties: {
            name: { type: "string" },
            country: { type: "string" },
            coordinates: {
              type: "object",
              properties: {
                lat: { type: "number" },
                lon: { type: "number" }
              }
            }
          }
        }
      },
      required: ["location"]
    }
  }
}]
```
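Once a provider returns `toolCalls`, the caller has to route each one to a local handler keyed by tool name. A minimal sketch (the handler bodies here are stubs; real handlers would call the image, weather, market, and time services):

```typescript
// Dispatch the toolCalls from an LLMResponse to local handler functions.
type ToolCall = {
  id: string;
  type: 'function';
  function: { name: string; arguments: string };
};

type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const handlers: Record<string, ToolHandler> = {
  // Stub implementations for illustration only.
  generate_image: async (args) => `image for: ${String(args.prompt)}`,
  get_time: async () => new Date().toISOString(),
};

async function dispatchToolCalls(calls: ToolCall[]): Promise<string[]> {
  const results: string[] = [];
  for (const call of calls) {
    const handler = handlers[call.function.name];
    if (!handler) {
      results.push(`unknown tool: ${call.function.name}`);
      continue;
    }
    // Arguments arrive as a JSON string per the tool-call format above.
    results.push(await handler(JSON.parse(call.function.arguments)));
  }
  return results;
}
```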

## Usage

### Basic Usage

```typescript
const factory = LLMFactory.getInstance();
const response = await factory.generateResponse(messages, cleanContent);
```

### Model Selection

```typescript
const factory = LLMFactory.getInstance();
const success = factory.setModel('GEMINI');
if (success) {
  const response = await factory.generateResponse(messages, cleanContent);
}
```

### Error Handling

```typescript
try {
  const response = await factory.generateResponse(messages, cleanContent);
} catch (error) {
  console.error('LLM Error:', error);
  // Handle provider-specific errors
}
```

## Best Practices

### Provider Selection

* Use OpenAI for complex tasks requiring function calling
* Use Gemini for basic chat interactions
* Use Grok for OpenAI-compatible function calling
* Use DeepSeek for specialized tasks
* Consider rate limits and costs
* Match provider capabilities to task requirements

### Error Handling

* Implement provider-specific error handling
* Handle rate limits and quotas
* Validate responses
* Log errors appropriately
* Implement retries where appropriate
* Handle network failures
* Normalize error formats
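The retry bullet above can be sketched as a generic wrapper; `withRetries` is an illustrative name, and the attempt count and backoff delays are arbitrary example values.

```typescript
// Retry an async provider call with exponential backoff, rethrowing the last
// error once all attempts are exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage might look like `await withRetries(() => factory.generateResponse(messages, cleanContent))`, keeping the retry policy out of the individual providers.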

### Performance

* Reuse factory instance (singleton pattern)
* Monitor token usage
* Implement caching where appropriate
* Handle streaming responses efficiently
* Optimize message history
* Clean content processing
* Validate tool calls
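The caching bullet above might be realized as a small in-memory cache keyed on the serialized conversation; `ResponseCache` is an illustrative name, and a real deployment would likely add TTLs and size bounds on top of this core idea.

```typescript
// In-memory response cache keyed on the message history and clean content.
type ChatMessage = { role: string; content: string };

class ResponseCache<T> {
  private store = new Map<string, T>();

  private key(messages: ChatMessage[], cleanContent: string): string {
    return JSON.stringify({ messages, cleanContent });
  }

  async getOrGenerate(
    messages: ChatMessage[],
    cleanContent: string,
    generate: () => Promise<T>,
  ): Promise<T> {
    const k = this.key(messages, cleanContent);
    const cached = this.store.get(k);
    if (cached !== undefined) return cached;
    // Cache miss: call the provider once and memoize the result.
    const fresh = await generate();
    this.store.set(k, fresh);
    return fresh;
  }
}
```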

## Related Documentation

* [Model Preferences](https://docs.lillo.ai/llm/preferences)
* [Provider Configuration](https://github.com/lillo-ai/website/blob/master/docs/core/framework/configuration.md)
* [Function Calling](https://docs.lillo.ai/llm/functions)
