Models

The models section declares the LLMs your agent will use. Each model entry has a unique identifier, a model type, and a configuration block.

Basic Structure

models:
  - id: main_model
    type: claude
    version: "claude-3-7-sonnet"
    config:
      temperature: 0.7
      max_tokens: 1000
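As a sketch of how a tool might represent one such entry in memory (the ModelDef dataclass and its field names below are illustrative, not part of NudgeLang itself):

```python
from dataclasses import dataclass, field

@dataclass
class ModelDef:
    """Illustrative in-memory form of one entry under models:."""
    id: str
    type: str
    version: str
    config: dict = field(default_factory=dict)

# The YAML entry above, expressed as a ModelDef:
main_model = ModelDef(
    id="main_model",
    type="claude",
    version="claude-3-7-sonnet",
    config={"temperature": 0.7, "max_tokens": 1000},
)
```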

Model Types

NudgeLang supports various model types:

1. Claude

models:
  - id: claude_model
    type: claude
    version: "claude-3-7-sonnet"  # or claude-3-5-sonnet, claude-3-haiku
    config:
      temperature: 0.7
      max_tokens: 1000
      top_p: 0.9
      top_k: 40

2. GPT

models:
  - id: gpt_model
    type: gpt
    version: "gpt-4"  # or gpt-3.5-turbo
    config:
      temperature: 0.7
      max_tokens: 1000
      presence_penalty: 0
      frequency_penalty: 0

3. Custom Models

models:
  - id: custom_model
    type: custom
    endpoint: "https://api.example.com/v1/chat"
    config:
      temperature: 0.7
      max_tokens: 1000
      headers:
        Authorization: "${env.API_KEY}"
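The ${env.API_KEY} placeholder is resolved from the process environment when the configuration is loaded. A minimal sketch of that substitution (the regex and the resolve_env function are illustrative, not NudgeLang's actual implementation):

```python
import os
import re

# Matches placeholders of the form ${env.NAME}
_PLACEHOLDER = re.compile(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env(value: str) -> str:
    """Replace ${env.NAME} placeholders with values from os.environ."""
    def _sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name!r} is not set")
        return os.environ[name]
    return _PLACEHOLDER.sub(_sub, value)

os.environ["API_KEY"] = "sk-test"            # for demonstration only
print(resolve_env("${env.API_KEY}"))         # sk-test
```

Failing loudly on a missing variable (rather than substituting an empty string) surfaces misconfigured secrets at load time instead of at request time.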

Model Configuration

Common Parameters

config:
  # Temperature controls randomness (0.0 to 1.0)
  temperature: 0.7
  
  # Maximum tokens in the response
  max_tokens: 1000
  
  # Stop sequences
  stop: ["\n", "Human:", "Assistant:"]
  
  # System message
  system: "You are a helpful assistant."
  
  # Model-specific parameters
  top_p: 0.9
  top_k: 40

Environment-Specific Configuration

models:
  - id: main_model
    type: claude
    version: "claude-3-7-sonnet"
    config:
      temperature: 0.7
      max_tokens: 1000
    environments:
      dev:
        version: "claude-3-5-sonnet"  # Use cheaper model in dev
        max_tokens: 500
      production:
        max_tokens: 2000
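Per-environment overrides are applied on top of the base settings: a key under environments.dev replaces the corresponding base key, and anything not overridden carries through unchanged. A sketch of that merge, assuming shallow key-by-key override semantics (the effective_config helper is illustrative):

```python
def effective_config(model: dict, environment: str) -> dict:
    """Merge a model's base settings with per-environment overrides.

    Overrides are shallow: a key under environments.<env> replaces the
    corresponding base key (including version).
    """
    merged = {"version": model["version"], **model.get("config", {})}
    merged.update(model.get("environments", {}).get(environment, {}))
    return merged

main_model = {
    "id": "main_model",
    "type": "claude",
    "version": "claude-3-7-sonnet",
    "config": {"temperature": 0.7, "max_tokens": 1000},
    "environments": {
        "dev": {"version": "claude-3-5-sonnet", "max_tokens": 500},
        "production": {"max_tokens": 2000},
    },
}

print(effective_config(main_model, "dev"))
# {'version': 'claude-3-5-sonnet', 'temperature': 0.7, 'max_tokens': 500}
```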

Model Chaining

You can define multiple models and chain them, for example placing a cheap, low-temperature classifier in front of a larger main model:

models:
  - id: classifier
    type: claude
    version: "claude-3-5-haiku"
    config:
      temperature: 0.1
      max_tokens: 150
  
  - id: main_model
    type: claude
    version: "claude-3-7-sonnet"
    config:
      temperature: 0.7
      max_tokens: 1000
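The typical pattern is routing: the small classifier labels the input first, and that label decides whether the larger model is invoked at all. A hypothetical sketch, with call_model standing in for whatever runtime actually invokes the configured models:

```python
def call_model(model_id: str, prompt: str) -> str:
    """Stand-in for the runtime call to a model configured above."""
    if model_id == "classifier":
        # A real classifier model would return a label; this fakes one.
        return "question" if prompt.endswith("?") else "statement"
    return f"[{model_id}] reply to: {prompt}"

def handle(prompt: str) -> str:
    """Chain: classify cheaply first, then route to the main model."""
    label = call_model("classifier", prompt)
    if label == "question":
        return call_model("main_model", prompt)
    return "Noted."

print(handle("What is NudgeLang?"))
# [main_model] reply to: What is NudgeLang?
```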

Model Fallbacks

Configure fallback models for reliability:

models:
  - id: primary_model
    type: claude
    version: "claude-3-7-sonnet"
    fallback:
      - type: claude
        version: "claude-3-5-sonnet"
      - type: gpt
        version: "gpt-4"

Complete Example

Here’s a complete models configuration:

models:
  - id: classifier
    type: claude
    version: "claude-3-5-haiku"
    config:
      temperature: 0.1
      max_tokens: 150
      system: "You are a classification model."
  
  - id: main_model
    type: claude
    version: "claude-3-7-sonnet"
    config:
      temperature: 0.7
      max_tokens: 1000
      top_p: 0.9
      top_k: 40
      system: "You are a helpful assistant."
    environments:
      dev:
        version: "claude-3-5-sonnet"
        max_tokens: 500
      production:
        max_tokens: 2000
    fallback:
      - type: claude
        version: "claude-3-5-sonnet"
      - type: gpt
        version: "gpt-4"

Best Practices

  1. Model Selection: Match model capability to task complexity
  2. Cost Optimization: Use cheaper models (e.g., Haiku) for classification and other simple tasks
  3. Environment Configuration: Use smaller models and lower token limits in dev
  4. Fallback Strategy: Declare fallback models so transient provider failures don't break your agent
  5. Token Management: Set max_tokens to the smallest limit the task needs
  6. Temperature Control: Use low temperatures (around 0.1) for classification and extraction, higher (around 0.7) for open-ended generation
