Overview

Conductor’s AI provider system lets Think members work with multiple AI providers (OpenAI, Anthropic, Cloudflare Workers AI, and custom endpoints) through a unified interface. The system automatically handles routing, failover, and provider-specific configuration. For example:
- member: analyze-content
  type: Think
  config:
    provider: anthropic  # or openai, cloudflare, custom
    model: claude-3-5-sonnet-20241022
    temperature: 0.7
  input:
    prompt: "Analyze this content: ${input.text}"

Supported Providers

Anthropic (Claude)

Official Anthropic provider for Claude models:
provider: anthropic
model: claude-3-5-sonnet-20241022
Models:
  • claude-3-5-sonnet-20241022 - Most capable
  • claude-3-opus-20240229 - Most capable of the Claude 3 generation
  • claude-3-sonnet-20240229 - Balanced speed and capability
  • claude-3-haiku-20240307 - Fastest
Configuration:
ANTHROPIC_API_KEY=sk-ant-...  # set as a secret or in .dev.vars

OpenAI (GPT)

Official OpenAI provider for GPT models:
provider: openai
model: gpt-4-turbo-preview
Models:
  • gpt-4-turbo-preview - Latest GPT-4, larger context window
  • gpt-4 - Original GPT-4
  • gpt-3.5-turbo - Fast and inexpensive
Configuration:
OPENAI_API_KEY=sk-...  # set as a secret or in .dev.vars

Cloudflare Workers AI

Cloudflare’s edge AI models (no API key needed):
provider: cloudflare
model: '@cf/meta/llama-3.1-8b-instruct'
Models:
  • @cf/meta/llama-3.1-8b-instruct - Fast, edge-optimized
  • @cf/meta/llama-2-7b-chat-int8 - Smaller, faster
  • @cf/mistral/mistral-7b-instruct-v0.1 - Alternative
Configuration:
# wrangler.toml
[ai]
binding = "AI"

Custom Provider

Support for custom, OpenAI-compatible endpoints:
provider: custom
model: your-model-name
apiEndpoint: https://your-api.com/v1/chat/completions
apiKey: ${env.CUSTOM_KEY}  # if the endpoint requires authentication

Provider Selection

Explicit Provider

Specify provider directly in config:
- member: generate-text
  type: Think
  config:
    provider: anthropic
    model: claude-3-5-sonnet-20241022

Default Provider

If no provider is specified, Conductor defaults to Anthropic:
- member: generate-text
  type: Think
  config:
    model: claude-3-5-sonnet-20241022  # Uses anthropic by default

Environment-Based

Choose provider based on environment:
- member: generate-text
  type: Think
  config:
    provider: ${env.AI_PROVIDER}  # Set via environment variable
    model: ${env.AI_MODEL}

Configuration

API Keys

Store API keys in environment variables:
# .dev.vars (local development)
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...

# Production: add as secrets via the Cloudflare dashboard or wrangler
wrangler secret put ANTHROPIC_API_KEY
wrangler secret put OPENAI_API_KEY

Provider Config

Common configuration options:
config:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  temperature: 0.7      # Creativity (0-1)
  maxTokens: 1000       # Max response length
  apiKey: ${env.CUSTOM_KEY}  # Override default key
  apiEndpoint: https://api.example.com  # Custom endpoint
  systemPrompt: You are a helpful assistant  # System message

Features

Automatic Validation

Providers validate configuration before execution:
// Validates API key exists
// Validates model is supported
// Validates bindings are configured

const result = await thinkMember.execute(context);
// Throws clear error if misconfigured
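For example, a minimal guard (a sketch; the exact error shape thrown on misconfiguration isn't specified here, so the handling below is illustrative):

try {
  const result = await thinkMember.execute(context);
  console.log(result.tokensUsed);
} catch (err) {
  // Missing API key, unsupported model, or absent binding lands here
  console.error(`Think member misconfigured or failed: ${err.message}`);
  throw err; // or route to a fallback provider (see Multi-Provider Fallback)
}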

Token Usage Tracking

Automatic token counting:
const response = await thinkMember.execute(context);

console.log(response.tokensUsed); // 234
console.log(response.model); // 'claude-3-5-sonnet-20241022'
console.log(response.provider); // 'anthropic'
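Because every response reports its token count, a thin wrapper is enough for basic cost monitoring. A sketch using only the fields shown above:

let totalTokens = 0;

async function trackedExecute(member, context) {
  const response = await member.execute(context);
  // Accumulate usage across calls for cost monitoring
  totalTokens += response.tokensUsed;
  console.log(`${response.provider}/${response.model}: ${response.tokensUsed} tokens (running total: ${totalTokens})`);
  return response;
}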

Provider Metadata

Access provider-specific details:
const response = await thinkMember.execute(context);

console.log(response.metadata);
// {
//   raw: {...},  // Raw provider response
//   finishReason: 'stop',
//   ...
// }
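One practical use of metadata is detecting truncation. Assuming the provider reports a non-'stop' finish reason when maxTokens is hit (the exact value is provider-specific), a check might look like:

const response = await thinkMember.execute(context);

// 'stop' means the model finished on its own; anything else
// (e.g. a length cutoff) may mean the response was truncated
if (response.metadata.finishReason !== 'stop') {
  console.warn(`Possible truncation: finishReason=${response.metadata.finishReason}`);
  // Consider retrying with a higher maxTokens
}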

Advanced Patterns

Multi-Provider Fallback

Try a second provider when the first fails:
flow:
  - member: try-anthropic
    type: Think
    config:
      provider: anthropic
      model: claude-3-5-sonnet-20241022
    input:
      prompt: ${input.prompt}
  
  - member: fallback-openai
    condition: ${try-anthropic.error}
    type: Think
    config:
      provider: openai
      model: gpt-4-turbo-preview
    input:
      prompt: ${input.prompt}

Cost Optimization

Use cheaper models for simple tasks:
flow:
  - member: classify
    type: Think
    config:
      provider: cloudflare  # Free, fast
      model: '@cf/meta/llama-3.1-8b-instruct'
    input:
      prompt: "Classify sentiment: ${input.text}"
  
  - member: detailed-analysis
    condition: ${classify.output.needsDetail}
    type: Think
    config:
      provider: anthropic  # More capable
      model: claude-3-5-sonnet-20241022
    input:
      prompt: "Detailed analysis: ${input.text}"

Provider Routing

Route based on task type:
flow:
  - member: choose-provider
    type: Function
    config:
      handler: |-
        (input) => {
          if (input.taskType === 'creative') {
            return { provider: 'anthropic', model: 'claude-3-5-sonnet-20241022' };
          } else if (input.taskType === 'quick') {
            return { provider: 'cloudflare', model: '@cf/meta/llama-3.1-8b-instruct' };
          } else {
            return { provider: 'openai', model: 'gpt-4-turbo-preview' };
          }
        }
  
  - member: execute-task
    type: Think
    config:
      provider: ${choose-provider.output.provider}
      model: ${choose-provider.output.model}
    input:
      prompt: ${input.prompt}

Best Practices

  1. Set API keys via secrets - Never hardcode keys
  2. Use appropriate models - Match model to task complexity
  3. Handle errors - Implement fallback providers
  4. Monitor token usage - Track costs
  5. Cache responses - Avoid redundant AI calls (see the sketch after this list)
  6. Test locally - Use Cloudflare provider for free testing
  7. Validate configuration - Check provider availability at startup
  8. Use system prompts - Set consistent behavior
  9. Implement timeouts - Prevent hanging requests
  10. Log provider usage - Debug and optimize
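A minimal sketch combining practices 5 and 9. It assumes a long-lived process with an in-memory Map (on Workers, KV or the Cache API would be the durable equivalent); the cache key derivation (which assumes the member input is available as context.input) and the timeout value are illustrative:

const cache = new Map();

async function cachedExecute(member, context, timeoutMs = 30_000) {
  // Naive cache key; a hash of the prompt + config would be more robust
  const key = JSON.stringify(context.input);
  if (cache.has(key)) return cache.get(key); // skip redundant AI calls

  // Race the provider call against a timer so requests cannot hang
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`AI call timed out after ${timeoutMs}ms`)), timeoutMs)
  );

  const response = await Promise.race([member.execute(context), timeout]);
  cache.set(key, response);
  return response;
}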