Overview

Bindings connect your Conductor worker to Cloudflare resources like D1 databases, KV namespaces, R2 buckets, and Vectorize indexes. Learn how to configure and use each binding type.

Binding Types

D1 Database

SQL database for structured data. Configuration:
[[d1_databases]]
binding = "DB"                    # Access as env.DB
database_name = "production-db"   # Database name
database_id = "abc-123-def"       # From dashboard
Create:
npx wrangler d1 create production-db
# Copy database_id from output
Usage:
- member: query-users
  type: Data
  config:
    storage: d1
    operation: query
    query: "SELECT * FROM users WHERE active = ?"
  input:
    params: [true]
// Direct access
const result = await env.DB.prepare(
  'SELECT * FROM users WHERE id = ?'
).bind(123).first();
Operations:
  • query - Execute SELECT/INSERT/UPDATE/DELETE
  • batch - Execute multiple queries atomically
  • exec - Execute DDL (CREATE TABLE, etc.)
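The batch and exec operations map directly onto the D1 client API. A minimal sketch (table and statement names are illustrative):
// Batch: run multiple prepared statements atomically
const [inserted, updated] = await env.DB.batch([
  env.DB.prepare('INSERT INTO users (name, active) VALUES (?, ?)').bind('Ada', 1),
  env.DB.prepare('UPDATE users SET active = ? WHERE id = ?').bind(0, 42)
]);

// Exec: run raw DDL without bound parameters
await env.DB.exec('CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)');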

KV Namespace

Key-value storage for caching and state. Configuration:
[[kv_namespaces]]
binding = "CACHE"           # Access as env.CACHE
id = "abc123def456"         # Production namespace ID
preview_id = "xyz789ghi"    # Preview namespace ID (optional)
Create:
npx wrangler kv:namespace create CACHE
npx wrangler kv:namespace create CACHE --preview  # For dev
Usage:
- member: cache-data
  type: Data
  config:
    storage: kv
    operation: put
    binding: CACHE
  input:
    key: "user:${input.userId}"
    value: ${fetch-user.output}
    expirationTtl: 3600  # 1 hour

- member: get-cached
  type: Data
  config:
    storage: kv
    operation: get
    binding: CACHE
  input:
    key: "user:${input.userId}"
// Direct access
await env.CACHE.put('key', 'value', { expirationTtl: 3600 });
const value = await env.CACHE.get('key');
await env.CACHE.delete('key');
const { keys } = await env.CACHE.list({ prefix: 'user:' });
Operations:
  • get - Retrieve value by key
  • put - Store key-value pair
  • delete - Remove key
  • list - List keys by prefix
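Values are strings by default; JSON parsing and per-key metadata are available through options on get and put. A minimal sketch (keys and metadata are illustrative):
// Store JSON with per-key metadata
await env.CACHE.put('user:1', JSON.stringify({ name: 'Ada' }), {
  expirationTtl: 3600,
  metadata: { source: 'api' }
});

// Read back as parsed JSON, with or without metadata
const user = await env.CACHE.get('user:1', 'json');
const { value, metadata } = await env.CACHE.getWithMetadata('user:1', 'json');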

R2 Bucket

Object storage for files and large data. Configuration:
[[r2_buckets]]
binding = "ASSETS"                    # Access as env.ASSETS
bucket_name = "conductor-assets"      # Bucket name
preview_bucket_name = "dev-assets"    # Preview bucket (optional)
Create:
npx wrangler r2 bucket create conductor-assets
Usage:
- member: store-file
  type: Data
  config:
    storage: r2
    operation: put
    binding: ASSETS
  input:
    key: "reports/${input.reportId}.pdf"
    value: ${generate-report.output.pdf}
    metadata:
      contentType: "application/pdf"
      userId: ${input.userId}

- member: get-file
  type: Data
  config:
    storage: r2
    operation: get
    binding: ASSETS
  input:
    key: "reports/${input.reportId}.pdf"
// Direct access
await env.ASSETS.put('file.pdf', pdfData, {
  httpMetadata: {
    contentType: 'application/pdf'
  }
});
const object = await env.ASSETS.get('file.pdf');
const arrayBuffer = await object.arrayBuffer();
await env.ASSETS.delete('file.pdf');
Operations:
  • get - Retrieve object
  • put - Store object
  • delete - Remove object
  • list - List objects by prefix
  • head - Get metadata only
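list and head follow the same pattern as get and put. A minimal sketch (keys and prefixes are illustrative):
// List objects under a prefix
const listing = await env.ASSETS.list({ prefix: 'reports/' });
for (const obj of listing.objects) {
  console.log(obj.key, obj.size);
}

// Head: fetch metadata without downloading the body
const head = await env.ASSETS.head('reports/123.pdf');
if (head) {
  console.log(head.httpMetadata?.contentType, head.uploaded);
}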

Vectorize Index

Vector database for embeddings and semantic search. Configuration:
[[vectorize]]
binding = "VECTORIZE"      # Access as env.VECTORIZE
index_name = "embeddings"  # Index name
Create:
npx wrangler vectorize create embeddings \
  --dimensions=1536 \
  --metric=cosine
Usage:
- member: search-knowledge
  type: RAG
  config:
    vectorizeBinding: "VECTORIZE"
    indexName: "embeddings"
    operation: query
  input:
    query: ${input.question}
    topK: 3
    scoreThreshold: 0.7
// Direct access
const results = await env.VECTORIZE.query(
  [0.1, 0.2, ...],  // Query vector
  { topK: 5 }
);
Operations:
  • query - Search for similar vectors
  • insert - Add vectors to index
  • upsert - Insert or update vectors
  • delete - Remove vectors
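insert and upsert take arrays of vectors with ids and optional metadata; each vector must match the index's configured dimensions. A minimal sketch (ids, values, and metadata are illustrative):
// Upsert overwrites vectors that already exist with the same id
await env.VECTORIZE.upsert([
  {
    id: 'doc-1',
    values: [0.12, 0.08, /* ...1536 values total */],
    metadata: { source: 'kb', title: 'Getting started' }
  }
]);

// Remove vectors by id
await env.VECTORIZE.deleteByIds(['doc-1']);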

Workers AI

Access to Cloudflare’s AI models. Configuration:
[ai]
binding = "AI"  # Access as env.AI
Usage:
- member: classify
  type: Think
  config:
    provider: workers-ai
    model: "@cf/meta/llama-3.1-8b-instruct"
  input:
    prompt: "Classify sentiment: ${input.text}"
// Direct access
const response = await env.AI.run(
  '@cf/meta/llama-3.1-8b-instruct',
  {
    prompt: 'Hello, how are you?'
  }
);
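Chat-style models also accept a messages array instead of a single prompt. A minimal sketch:
// Chat-style invocation with system and user messages
const chat = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
  messages: [
    { role: 'system', content: 'You are a terse sentiment classifier.' },
    { role: 'user', content: 'Classify sentiment: I love this product.' }
  ]
});
console.log(chat.response);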

Durable Objects

Stateful coordination and human-in-the-loop (HITL) workflows. Configuration:
[[durable_objects.bindings]]
name = "EXECUTION_STATE"
class_name = "ExecutionState"
script_name = "my-conductor-app"

[[durable_objects.bindings]]
name = "HITL_STATE"
class_name = "HITLState"
script_name = "my-conductor-app"

[[migrations]]
tag = "v1"
new_classes = ["ExecutionState", "HITLState"]
Export in worker:
export { ExecutionState } from '@ensemble-edge/conductor/durable-objects';
export { HITLState } from '@ensemble-edge/conductor/durable-objects';
Usage:
// Get Durable Object instance
const id = env.HITL_STATE.idFromName('approval-123');
const stub = env.HITL_STATE.get(id);

// Call methods
const response = await stub.fetch('https://fake-host/suspend', {
  method: 'POST',
  body: JSON.stringify({ context: {...} })
});
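idFromName always routes to the same instance for a given name, while newUniqueId creates a fresh instance each time. A minimal sketch of the standard Durable Object namespace API:
// Same name, same object: deterministic routing
const named = env.EXECUTION_STATE.get(env.EXECUTION_STATE.idFromName('order-123'));

// Brand-new object with a generated id (persist the id string to reach it again)
const uniqueId = env.EXECUTION_STATE.newUniqueId();
const fresh = env.EXECUTION_STATE.get(uniqueId);
const saved = uniqueId.toString();

// Later: rebuild the id from its string form
const sameId = env.EXECUTION_STATE.idFromString(saved);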

Queue

Async task processing. Configuration:
[[queues.producers]]
binding = "TASK_QUEUE"
queue = "conductor-tasks"

[[queues.consumers]]
queue = "conductor-tasks"
max_batch_size = 10
max_batch_timeout = 30
Create:
npx wrangler queues create conductor-tasks
Usage:
// Producer: Send messages
await env.TASK_QUEUE.send({
  type: 'process-order',
  orderId: 123
});

// Consumer: Receive messages
export default {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const message of batch.messages) {
      await processTask(message.body);
      message.ack();
    }
  }
};
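Failed messages can be retried individually or as a whole batch; anything neither acked nor retried is redelivered according to the queue's retry settings. A sketch of the standard consumer API (processTask is the same illustrative helper as above):
export default {
  async queue(batch: MessageBatch, env: Env): Promise<void> {
    for (const message of batch.messages) {
      try {
        await processTask(message.body);
        message.ack();
      } catch (err) {
        // Redeliver only this message; batch.retryAll() would retry the whole batch
        message.retry();
      }
    }
  }
};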

Multiple Bindings

Same Type, Different Names

# Multiple KV namespaces
[[kv_namespaces]]
binding = "CACHE"
id = "cache-id"

[[kv_namespaces]]
binding = "SESSIONS"
id = "sessions-id"

# Multiple D1 databases
[[d1_databases]]
binding = "PRIMARY_DB"
database_id = "db1-id"

[[d1_databases]]
binding = "ANALYTICS_DB"
database_id = "db2-id"
# Use different bindings
- member: cache-user
  config:
    storage: kv
    operation: put
    binding: CACHE  # First KV

- member: store-session
  config:
    storage: kv
    operation: put
    binding: SESSIONS  # Second KV

Environment-Specific Bindings

# Development
[env.dev]
[[env.dev.d1_databases]]
binding = "DB"
database_id = "dev-db-id"

[[env.dev.kv_namespaces]]
binding = "CACHE"
id = "dev-cache-id"

# Production
[env.production]
[[env.production.d1_databases]]
binding = "DB"
database_id = "prod-db-id"

[[env.production.kv_namespaces]]
binding = "CACHE"
id = "prod-cache-id"

Testing with Bindings

Local Development

# wrangler dev uses preview bindings automatically
npx wrangler dev

Test with Miniflare

import { TestConductor } from '@ensemble-edge/conductor/testing';

const conductor = await TestConductor.create({
  mocks: {
    db: {
      users: [
        { id: 1, name: 'Test User' }
      ]
    },
    kv: {
      'user:1': JSON.stringify({ name: 'Test User' })
    }
  }
});
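Bindings can also be stubbed in-memory by driving Miniflare directly. A sketch (the script path and binding names are illustrative and must match your wrangler.toml):
import { Miniflare } from 'miniflare';

const mf = new Miniflare({
  modules: true,
  scriptPath: './dist/index.js',
  kvNamespaces: ['CACHE'],
  d1Databases: ['DB'],
  r2Buckets: ['ASSETS']
});

const res = await mf.dispatchFetch('http://localhost/health');
console.log(await res.text());
await mf.dispose();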

Binding Limits

D1 Database

  • Max databases per account: 10 (can request increase)
  • Max database size: 10GB
  • Max queries per request: 50
  • Max query execution time: 30 seconds

KV Namespace

  • Max namespaces per account: 100
  • Max key size: 512 bytes
  • Max value size: 25MB
  • Max metadata: 1KB per key
  • Reads: 100,000/day (free), unlimited (paid)
  • Writes: 1,000/day (free), unlimited (paid)

R2 Bucket

  • Max buckets per account: 1,000
  • Max object size: 5TB
  • Storage: 10GB free, then pay-as-you-go
  • Operations: Class A (1 million free), Class B (10 million free)

Vectorize Index

  • Max indexes per account: 100
  • Max dimensions: 1536
  • Max vectors per index: 5 million
  • Max queries per second: 1,000

Durable Objects

  • Max classes per script: 10
  • Max concurrent instances: No hard limit (rate limited)
  • Max storage per object: 128KB per key-value entry; SQLite-backed storage is limited separately

Best Practices

  1. Use descriptive binding names - USER_CACHE not KV1
  2. Separate data by environment - Different bindings for dev/prod
  3. Limit binding access - Only bind what you need
  4. Use preview bindings - Test with non-production data
  5. Document bindings - Comment what each binding is for
  6. Monitor usage - Track read/write operations
  7. Implement cleanup - Delete unused keys/objects
  8. Validate binding availability - Check env.BINDING exists (see the sketch after this list)
  9. Use appropriate storage - KV for cache, D1 for relational, R2 for files
  10. Version Durable Object migrations - Increment tags properly
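A minimal guard for point 8, run inside a handler where env is available (the helper and error message are illustrative):
// Fail fast if a required binding was not configured in wrangler.toml
function requireBinding<T>(value: T | undefined, name: string): T {
  if (!value) {
    throw new Error(`Missing binding: ${name}. Check wrangler.toml and redeploy.`);
  }
  return value;
}

const db = requireBinding(env.DB, 'DB');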

Troubleshooting

Binding not found

Error: env.DB is undefined
Solution:
  1. Check that the binding is declared in wrangler.toml
  2. Verify the resource ID is correct
  3. Redeploy: npx wrangler deploy

Wrong binding name

# Error: Binding 'CACHE' not found
- member: cache
  config:
    binding: CACHE  # Check this matches wrangler.toml

Preview vs Production

Problem: local development is reading and writing production data. Solution: add a preview_id so wrangler dev uses a separate namespace:
[[kv_namespaces]]
binding = "CACHE"
id = "prod-cache-id"
preview_id = "dev-cache-id"  # Add this