Your First Project

From zero to deployed edge workflow in 5 minutes. No ceremony, no boilerplate.

Prerequisites

You need:
  • Node.js 18+ (nodejs.org)
  • A Cloudflare account (free tier works fine)
  • Wrangler CLI (we’ll install it)
Check Node.js:
node --version  # Should be 18+

Install Wrangler

npm install -g wrangler
wrangler login
This opens your browser to authorize Wrangler with Cloudflare.

Create Your Project

npm create cloudflare@latest my-conductor-app
When prompted:
  • Type: “Hello World” Worker (we’ll add Conductor next)
  • TypeScript: Yes (recommended)
  • Git: Yes
  • Deploy: Not yet

Add Conductor

cd my-conductor-app
npm install @ensemble-edge/conductor

Project Structure

Your project should look like this:
my-conductor-app/
├── src/
│   └── index.ts           # Worker entry point
├── agents/                # Your custom agents (optional)
├── ensembles/             # Your ensemble configs
│   └── hello.yaml         # We'll create this
├── components/            # Versioned components (prompts, configs, etc.)
├── wrangler.toml          # Cloudflare config
├── package.json
└── tsconfig.json

Configure Cloudflare

Edit wrangler.toml:
name = "my-conductor-app"
main = "src/index.ts"
compatibility_date = "2024-01-01"

# Enable Workers AI (free tier: 10k requests/day)
[ai]
binding = "AI"
That’s it. No complex setup, no dozens of config files.

Create Your First Ensemble

Create ensembles/hello.yaml:
ensemble: hello
description: Simple greeting workflow

agents:
  - name: greeter
    operation: think
    config:
      provider: cloudflare
      model: '@cf/meta/llama-3.1-8b-instruct'
      prompt: |
        Generate a friendly greeting for ${input.name}.
        Make it warm and welcoming.

output:
  greeting: ${greeter.output}
That’s your entire workflow. No classes to extend, no interfaces to implement, just declare what you want.

Wire It Up

Edit src/index.ts:
import { Conductor } from '@ensemble-edge/conductor';

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const conductor = new Conductor({ env });

    // Parse request
    const url = new URL(request.url);
    const name = url.searchParams.get('name') || 'World';

    // Execute ensemble
    const result = await conductor.execute('hello', { name });

    // Return result
    return Response.json(result);
  }
};
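The handler above types its second parameter as `Env`; in a fresh project that type comes from Wrangler's generated bindings. As a sketch, here is a minimal hand-written shape for it, plus the request-parsing step factored into a pure helper so the default is easy to unit-test (`getName` is an illustrative name, not part of Conductor's API):

```typescript
// Minimal shape of the Env the handler receives once the [ai] binding
// is configured (with @cloudflare/workers-types you would use the
// provided Ai type instead of unknown).
interface Env {
  AI: unknown;
}

// Extract the `name` query parameter, falling back to a default.
// Pure function: testable without a running Worker.
function getName(requestUrl: string, fallback = 'World'): string {
  return new URL(requestUrl).searchParams.get('name') || fallback;
}

// Usage inside the fetch handler:
//   const name = getName(request.url);
```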

Test Locally

npm run dev
Visit http://localhost:8787?name=Alice. You should see:
{
  "greeting": "Hello Alice! Welcome! It's wonderful to have you here..."
}
It works. Your first edge AI workflow is running locally.

Deploy to Production

npm run deploy
Output:
Published my-conductor-app (0.5 sec)
  https://my-conductor-app.your-subdomain.workers.dev
Current Deployment ID: abc123...
Test it:
curl "https://my-conductor-app.your-subdomain.workers.dev?name=World"
You’re live. Your workflow is running at the edge in 300+ cities worldwide.
  • Cold start: <50ms
  • Execution: ~200ms
  • Cost: Free tier covers 100k requests/day

What Just Happened?

  1. Conductor parsed your YAML ensemble
  2. The think operation invoked Cloudflare’s Llama model
  3. The result was returned as JSON
  4. The Worker was deployed globally to Cloudflare’s edge network
No servers. No containers. No Kubernetes. Just a YAML file and a dozen lines of TypeScript.
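Under the hood, the `${...}` references in the YAML are resolved against an execution context before each agent runs. A minimal sketch of that idea (illustrative only, not Conductor's actual implementation):

```typescript
type Ctx = Record<string, any>;

// Replace ${path.to.value} placeholders in a template string by
// looking the dotted path up in a context object.
function resolveRefs(template: string, ctx: Ctx): string {
  return template.replace(/\$\{([\w.]+)\}/g, (_, path: string) => {
    const value = path
      .split('.')
      .reduce((obj: any, key: string) => obj?.[key], ctx);
    return value === undefined ? '' : String(value);
  });
}
```

For example, `resolveRefs('Generate a friendly greeting for ${input.name}.', { input: { name: 'Alice' } })` substitutes the name before the prompt reaches the model.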

Add More Operations

Let’s make it more interesting. Update ensembles/hello.yaml:
ensemble: hello
description: AI greeting with data storage

agents:
  # Generate greeting
  - name: greeter
    operation: think
    config:
      provider: cloudflare
      model: '@cf/meta/llama-3.1-8b-instruct'
      prompt: |
        Generate a unique greeting for ${input.name}.
        Be creative and memorable.

  # Store greeting count
  - name: counter
    operation: storage
    config:
      type: kv
      action: get
      key: greeting-count
      default: 0

  # Increment count
  - name: increment
    operation: code
    config:
      code: |
        return { count: ${counter.output.value} + 1 };

  # Save new count
  - name: save-count
    operation: storage
    config:
      type: kv
      action: put
      key: greeting-count
      value: ${increment.output.count}

output:
  greeting: ${greeter.output}
  count: ${increment.output.count}
  message: "You are visitor #${increment.output.count}"
Add KV namespace to wrangler.toml:
[[kv_namespaces]]
binding = "CACHE"
id = "your-kv-id"  # Create with: wrangler kv:namespace create CACHE
Create the KV namespace:
wrangler kv:namespace create CACHE
# Copy the 'id' from output to wrangler.toml
Deploy:
npm run deploy
Test:
curl "https://my-conductor-app.your-subdomain.workers.dev?name=Alice"
Result:
{
  "greeting": "Greetings, Alice! What a delight to meet you...",
  "count": 1,
  "message": "You are visitor #1"
}
Each request increments the counter. You just added:
  • AI text generation (operation: think)
  • Data storage (operation: storage with KV)
  • Data transformation (operation: code)
All declaratively. No SQL migrations, no ORM setup, no cache configuration.
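The get → increment → put sequence above is a classic read-modify-write counter. Sketched as plain TypeScript against an in-memory stand-in for KV (illustrative only; real KV is eventually consistent, so exact counts aren't guaranteed under concurrent requests):

```typescript
// In-memory stand-in for a KV namespace.
const kv = new Map<string, string>();

// Mirrors the three agents: counter (get with default 0),
// increment (code), and save-count (put).
function recordVisit(key = 'greeting-count'): number {
  const current = Number(kv.get(key) ?? '0'); // counter: get, default 0
  const next = current + 1;                   // increment
  kv.set(key, String(next));                  // save-count: put
  return next;
}
```

The first call returns 1, the second 2, and so on, matching the `visitor #N` message.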

Project Structure (Full)

Here’s what a real project looks like:
my-conductor-app/
├── src/
│   └── index.ts                  # Worker entry point
├── ensembles/
│   ├── hello.yaml                # Simple greeting
│   ├── analytics.yaml            # Data processing
│   └── newsletter.yaml           # Email campaign
├── agents/                       # Custom agents (optional)
│   ├── scraper/
│   │   └── agent.yaml
│   └── validator/
│       └── agent.yaml
├── components/
│   ├── prompts/
│   │   ├── greeting.md           # Versioned prompts
│   │   └── analysis.md
│   └── configs/
│       └── api.json              # Versioned configs
├── tests/
│   ├── hello.test.ts             # Ensemble tests
│   └── analytics.test.ts
├── wrangler.toml                 # Cloudflare config
├── .edgit/
│   └── components.json           # Edgit registry
├── package.json
└── tsconfig.json

Common Patterns

Pattern 1: API Wrapper

Wrap external APIs with caching:
ensemble: api-wrapper

agents:
  - name: fetch-data
    operation: http
    config:
      url: https://api.example.com/data
      method: GET
    cache:
      ttl: 3600  # Cache for 1 hour

output:
  data: ${fetch-data.output.body}
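The `cache: ttl` setting amounts to a time-bounded memo of the HTTP response. The concept, sketched with an in-memory map (Conductor's actual cache lives in Cloudflare's infrastructure; this only illustrates the behavior):

```typescript
interface CacheEntry<T> { value: T; expiresAt: number }
const cache = new Map<string, CacheEntry<unknown>>();

// Return the cached value if still fresh; otherwise call fetcher,
// store the result with a TTL in seconds, and return it.
async function withTtlCache<T>(
  key: string,
  ttlSeconds: number,
  fetcher: () => Promise<T>,
): Promise<T> {
  const hit = cache.get(key) as CacheEntry<T> | undefined;
  if (hit && hit.expiresAt > Date.now()) return hit.value;
  const value = await fetcher();
  cache.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  return value;
}
```

Within the TTL window, repeated calls with the same key never hit the upstream API.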

Pattern 2: Multi-Step Processing

Chain operations together:
ensemble: process-document

agents:
  # Fetch document
  - name: fetch
    operation: http
    config:
      url: ${input.url}

  # Extract text from PDF
  - name: extract
    operation: pdf
    config:
      action: extract
      content: ${fetch.output.body}

  # Analyze with AI
  - name: analyze
    operation: think
    config:
      provider: openai
      model: gpt-4o-mini
      prompt: |
        Analyze this document: ${extract.output.text}
        Provide key insights.

  # Store results
  - name: store
    operation: storage
    config:
      type: d1
      query: |
        INSERT INTO documents (url, analysis, timestamp)
        VALUES (?, ?, ?)
      params:
        - ${input.url}
        - ${analyze.output}
        - ${Date.now()}

output:
  analysis: ${analyze.output}
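Each agent in a chain like this reads earlier agents' outputs from a shared context, so the whole ensemble is effectively a left-to-right fold over steps. A minimal sketch of that execution model (not the real engine):

```typescript
type Ctx = Record<string, unknown>;
type Step = { name: string; run: (ctx: Ctx) => unknown };

// Run steps in order; each step's return value is stored under its
// name so later steps (and the output mapping) can reference it.
function runEnsemble(steps: Step[], input: Ctx): Ctx {
  const ctx: Ctx = { input };
  for (const step of steps) ctx[step.name] = step.run(ctx);
  return ctx;
}
```

This is why `${extract.output.text}` works in the analyze prompt: by the time that agent runs, `extract`'s result is already in the context.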

Pattern 3: Conditional Flow

Execute agents conditionally:
ensemble: smart-routing

agents:
  # Check cache first
  - name: check-cache
    operation: storage
    config:
      type: kv
      action: get
      key: result-${input.query}

  # Only run AI if cache miss
  - name: generate
    operation: think
    condition: ${check-cache.output.value === null}
    config:
      provider: cloudflare
      model: '@cf/meta/llama-3.1-8b-instruct'
      prompt: ${input.query}

  # Save to cache if generated
  - name: save-cache
    operation: storage
    condition: ${generate.executed}
    config:
      type: kv
      action: put
      key: result-${input.query}
      value: ${generate.output}

output:
  result: ${check-cache.output.value || generate.output}
  cached: ${!generate.executed}
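A conditional agent evaluates its `condition` against the context and records whether it ran, which is what `${generate.executed}` inspects afterward. Sketched as a hypothetical runtime shape (not Conductor's actual internals):

```typescript
type Ctx = Record<string, any>;

interface ConditionalStep {
  name: string;
  condition?: (ctx: Ctx) => boolean; // omitted = always run
  run: (ctx: Ctx) => unknown;
}

// Run a step only if its condition holds; record the output and an
// `executed` flag so later steps and outputs can branch on it.
function runConditional(step: ConditionalStep, ctx: Ctx): void {
  const shouldRun = step.condition ? step.condition(ctx) : true;
  ctx[step.name] = shouldRun
    ? { output: step.run(ctx), executed: true }
    : { output: undefined, executed: false };
}
```

On a cache miss the generate step runs and `executed` is true; on a hit it is skipped and the output mapping falls back to the cached value.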

Development Workflow

# Start dev server
npm run dev

# Run tests
npm test

# Deploy to staging
wrangler deploy --env staging

# Deploy to production
wrangler deploy --env production

# View logs
wrangler tail

# Check metrics
wrangler deployments list

Troubleshooting

Problem: Can’t authorize with Cloudflare
Fix:
# Try manual token
wrangler login --browser=false
# Follow instructions to paste token
Problem: env.AI is undefined
Fix: Add to wrangler.toml:
[ai]
binding = "AI"
Restart dev server after changing config.
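A cheap runtime guard makes this failure mode obvious instead of surfacing as a vague undefined error deeper in the request (`assertBinding` is a hypothetical helper, not part of Conductor):

```typescript
// Fail fast with an actionable message when a binding is missing.
function assertBinding(env: Record<string, unknown>, name: string): void {
  if (env[name] == null) {
    throw new Error(
      `Missing binding "${name}": check wrangler.toml and restart the dev server`,
    );
  }
}

// Usage at the top of the fetch handler:
//   assertBinding(env as Record<string, unknown>, 'AI');
```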
Problem: Ensemble 'hello' not found
Fix: Ensure the YAML file is in the ensembles/ directory and named correctly. Conductor auto-discovers all .yaml files in ensembles/.
Problem: KV namespace 'CACHE' not found
Fix:
# Create namespace
wrangler kv:namespace create CACHE

# Add ID to wrangler.toml
[[kv_namespaces]]
binding = "CACHE"
id = "abc123..."  # From create command output

Tips

  1. Start simple - One agent, one operation, then add complexity
  2. Use KV liberally - It’s fast (<10ms) and generous (free tier: 100k reads/day)
  3. Test locally first - wrangler dev is your friend
  4. Check logs - wrangler tail shows real-time logs
  5. Mind the limits - Free tier: 100k requests/day, 10k AI requests/day
  6. Version components - Use Edgit to version prompts and configs
  7. Cache aggressively - AI operations are slow, caching is fast
  8. Monitor costs - Check Cloudflare dashboard regularly