Overview
Triggers define how ensembles are invoked. Conductor supports nine trigger types, all configured using the unified trigger: array in your ensemble YAML:
- HTTP - Full web routing with path params, CORS, rate limiting, HTML/JSON responses
- Webhook - Simple HTTP endpoints for external integrations
- MCP - Model Context Protocol tool exposure
- Email - Email routing and processing
- Queue - Cloudflare Queues message processing
- Cron - Scheduled execution with cron expressions
- Build - Static generation at build time
- CLI - Custom developer commands
- Startup - Execute on Worker cold start (initialization)
All triggers use the same configuration pattern:
name: my-ensemble
trigger:
- type: <trigger-type>
# trigger-specific configuration
HTTP Triggers
Full web routing with path parameters, CORS, rate limiting, authentication, and HTML or JSON responses. Use HTTP triggers for building APIs, web pages, and complex web applications.
Basic JSON API
name: users-api
trigger:
- type: http
path: /api/users/:id
methods: [GET]
public: true
responses:
json: {enabled: true}
flow:
- agent: fetch-user
input: {userId: ${input.params.id}}
agents:
- name: fetch-user
operation: data
config:
backend: d1
binding: DB
query: "SELECT * FROM users WHERE id = ?"
params: [${input.userId}]
outputs:
user: ${fetch-user.output[0]}
Access: GET /api/users/123 → Returns JSON
Server-Rendered HTML Page
name: blog-post
trigger:
- type: http
path: /blog/:slug
methods: [GET]
public: true
responses:
html: {enabled: true}
templateEngine: liquid
flow:
- agent: fetch-post
input: {slug: ${input.params.slug}}
- operation: html
config:
template: |
<!DOCTYPE html>
<html>
<head><title>{{ fetch-post.title }}</title></head>
<body>
<article>
<h1>{{ fetch-post.title }}</h1>
<div>{{ fetch-post.content }}</div>
</article>
</body>
</html>
data: ${fetch-post}
agents:
- name: fetch-post
operation: data
config:
backend: d1
binding: DB
query: "SELECT * FROM posts WHERE slug = ?"
params: [${input.slug}]
outputs:
html: ${html.output}
Access: GET /blog/my-post → Returns HTML page
HTTP with Authentication & Rate Limiting
trigger:
- type: http
path: /api/chat
methods: [POST]
auth:
type: bearer
secret: ${env.API_KEY}
rateLimit:
requests: 10
window: 60 # 10 requests per minute
cors:
origin: "https://myapp.com"
credentials: true
responses:
json: {enabled: true}
HTTP Request Context
HTTP triggers automatically parse request data and make it available to your ensemble:
| Field | Description |
|---|---|
| input.method | HTTP method (GET, POST, etc.) |
| input.params | Path parameters (/users/:id → input.params.id) |
| input.query | Query string parameters |
| input.headers | Request headers |
| input.body | Request body (parsed JSON) |
| input.cookies | Parsed cookies from the Cookie header |
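For example, a flow step can read these fields straight from the request context (an illustrative sketch; the q and page query parameters are hypothetical):
flow:
  - agent: search
    input:
      term: ${input.query.q}
      page: ${input.query.page}
      method: ${input.method}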
Cookie Access:
agents:
- name: check-session
condition: ${input.cookies.session_id}
operation: storage
config:
type: kv
action: get
key: session-${input.cookies.session_id}
To set cookies in responses, use the cookies operation:
agents:
- name: create-session
operation: cookies
config:
action: set
name: session_id
value: ${generate-id.output}
httpOnly: true
secure: true
sameSite: strict
maxAge: 86400
The cookies operation integrates with Location Context for GDPR/CCPA consent-aware cookie management.
HTTP vs Webhook
| Feature | HTTP | Webhook |
|---|---|---|
| Path params | ✅ /users/:id | ❌ No |
| Multiple methods | ✅ GET, POST, etc. | ✅ Yes |
| Rate limiting | ✅ Yes | ❌ No |
| CORS | ✅ Yes | ❌ No |
| HTML rendering | ✅ Yes | ❌ No |
| Use case | Web apps, APIs | Simple webhooks |
Rule of thumb: Use http for web routing and pages. Use webhook for receiving webhooks from external services.
Multi-Path HTTP Triggers
Handle multiple related endpoints in a single ensemble using the paths array. This allows one ensemble to serve multiple routes with different HTTP methods and path parameters.
name: users-api
trigger:
- type: http
paths:
- path: /api/v1/users
methods: [GET, POST]
- path: /api/v1/users/:id
methods: [GET, PUT, DELETE]
public: true
flow:
- operation: code
config:
handler: |
const { method, params } = context.input
if (method === 'GET' && params.id) {
// GET /api/v1/users/:id - Fetch single user
return { action: 'fetch-user', userId: params.id }
} else if (method === 'GET') {
// GET /api/v1/users - List users
return { action: 'list-users' }
} else if (method === 'POST') {
// POST /api/v1/users - Create user
return { action: 'create-user', data: context.input.body }
} else if (method === 'PUT' && params.id) {
// PUT /api/v1/users/:id - Update user
return { action: 'update-user', userId: params.id, data: context.input.body }
} else if (method === 'DELETE' && params.id) {
// DELETE /api/v1/users/:id - Delete user
return { action: 'delete-user', userId: params.id }
}
- agent: users-handler
input: ${code.output}
agents:
- name: users-handler
operation: data
config:
backend: d1
binding: DB
query: ${input.action === 'fetch-user' ? 'SELECT * FROM users WHERE id = ?' :
input.action === 'list-users' ? 'SELECT * FROM users' :
input.action === 'create-user' ? 'INSERT INTO users (name, email) VALUES (?, ?)' :
input.action === 'update-user' ? 'UPDATE users SET name = ?, email = ? WHERE id = ?' :
'DELETE FROM users WHERE id = ?'}
outputs:
result: ${users-handler.output}
Benefits of Multi-Path Triggers:
- Organize related endpoints in one ensemble
- Share authentication and middleware across paths
- Reduce configuration duplication
- Keep related business logic together
- Support RESTful API patterns naturally
Path Parameters:
- Use :param syntax for dynamic segments (e.g., /users/:id, /posts/:slug)
- Access via ${input.params.id}, ${input.params.slug}, etc.
- Works with any HTTP method
Example: Blog API
trigger:
- type: http
paths:
- path: /blog
methods: [GET]
- path: /blog/:slug
methods: [GET]
- path: /blog/:slug/comments
methods: [GET, POST]
public: true
rateLimit:
requests: 100
window: 60
This single ensemble handles:
- GET /blog - List all posts
- GET /blog/:slug - View single post
- GET /blog/:slug/comments - List comments
- POST /blog/:slug/comments - Add comment
Complex Website Structure
For full-blown websites with sitemaps, robots.txt, dynamic pages, etc., organize ensembles by route:
ensembles/
├── pages/
│ ├── home.yaml # GET /
│ ├── about.yaml # GET /about
│ ├── blog-list.yaml # GET /blog
│ ├── blog-post.yaml # GET /blog/:slug
│ ├── user-dashboard.yaml # GET /dashboard/:userId
│ └── contact-form.yaml # GET /contact, POST /contact
├── api/
│ ├── users.yaml # GET /api/users/:id
│ ├── posts.yaml # GET /api/posts, POST /api/posts
│ └── search.yaml # GET /api/search
├── static/
│ ├── robots.yaml # GET /robots.txt
│ ├── sitemap.yaml # GET /sitemap.xml
│ └── health.yaml # GET /health
└── auth/
├── login.yaml # POST /auth/login
└── logout.yaml # POST /auth/logout
Each file is an ensemble with trigger: {type: http}:
Example: ensembles/static/robots.yaml
name: robots-txt
trigger:
- type: http
path: /robots.txt
methods: [GET]
public: true
flow:
- operation: code
handler: |
return {
output: `User-agent: *
Allow: /
Sitemap: https://yoursite.com/sitemap.xml`
}
outputs:
content: ${code.output}
Example: ensembles/static/sitemap.yaml
name: sitemap-xml
trigger:
- type: http
path: /sitemap.xml
methods: [GET]
public: true
flow:
- agent: fetch-all-posts
operation: data
config:
backend: d1
query: "SELECT slug, updated_at FROM posts WHERE status = 'published'"
- operation: code
handler: |
const urls = input.fetchAllPosts.map(post =>
`<url>
<loc>https://yoursite.com/blog/${post.slug}</loc>
<lastmod>${post.updated_at}</lastmod>
</url>`
).join('\n')
return {
output: `<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
${urls}
</urlset>`
}
outputs:
xml: ${code.output}
This approach gives you:
- ✅ Full control over every route
- ✅ Each route is testable independently
- ✅ Easy to add auth, rate limiting per route
- ✅ Auto-discovery finds all ensembles
- ✅ SEO-friendly (sitemaps, robots.txt)
- ✅ Dynamic content from database
- ✅ AI-powered pages via think agents
Webhook Triggers
Expose ensembles as HTTP endpoints for external services.
You own your webhook paths and can define any path you want. We recommend using /webhooks/* paths for clarity (e.g., /webhooks/github, /webhooks/stripe).
Basic Webhook
name: data-processor
trigger:
- type: webhook
path: /webhooks/process # Recommended: /webhooks/* prefix
methods: [POST]
public: true
flow:
- agent: process-data
agents:
- name: process-data
operation: think
config:
provider: anthropic
model: claude-sonnet-4
prompt: "Process this data: ${input.data}"
outputs:
result: ${process-data.output}
Invoke via HTTP:
curl -X POST https://your-worker.workers.dev/webhooks/process \
-H "Content-Type: application/json" \
-d '{"data": "hello world"}'
Authenticated Webhook
trigger:
- type: webhook
path: /webhooks/secure-endpoint
methods: [POST, PUT]
auth:
type: bearer
secret: ${env.API_TOKEN}
Invoke with authentication:
curl -X POST https://your-worker.workers.dev/webhooks/secure-endpoint \
-H "Authorization: Bearer ${API_TOKEN}" \
-H "Content-Type: application/json" \
-d '{"action": "process"}'
Webhook Authentication Types
Bearer Token:
trigger:
- type: webhook
path: /webhooks/bearer-auth
auth:
type: bearer
secret: ${env.WEBHOOK_SECRET}
HMAC Signature (GitHub-style):
trigger:
- type: webhook
path: /webhooks/github
auth:
type: signature
secret: ${env.GITHUB_SECRET}
Sender must include:
X-Webhook-Signature: sha256=abc123...
X-Webhook-Timestamp: 1705315200
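A sender-side sketch for producing these headers, assuming the signature is an HMAC-SHA256 of the raw request body (verify the exact signed payload - body only vs. timestamp + body - against your Conductor version):
BODY='{"action": "process"}'
SIGNATURE=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$GITHUB_SECRET" | awk '{print $NF}')
curl -X POST https://your-worker.workers.dev/webhooks/github \
  -H "Content-Type: application/json" \
  -H "X-Webhook-Signature: sha256=${SIGNATURE}" \
  -H "X-Webhook-Timestamp: $(date +%s)" \
  -d "$BODY"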
Basic Authentication:
trigger:
- type: webhook
path: /webhooks/basic-auth
auth:
type: basic
secret: ${env.BASIC_CREDS} # Format: username:password
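Invoke with Basic authentication (curl's -u flag builds the Authorization: Basic header; the credentials must match the username:password pair in BASIC_CREDS):
curl -X POST https://your-worker.workers.dev/webhooks/basic-auth \
  -u "username:password" \
  -H "Content-Type: application/json" \
  -d '{"action": "process"}'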
Async Webhook Execution
For long-running ensembles, return immediately and process in background:
trigger:
- type: webhook
path: /webhooks/long-task
methods: [POST]
async: true
timeout: 300000 # 5 minutes
public: true
Returns immediately with execution ID:
{
"executionId": "exec-abc123",
"status": "processing"
}
MCP Triggers
Expose ensembles as Model Context Protocol tools for AI assistants. Conductor automatically generates MCP tool schemas from your ensemble’s inputs definition.
name: search-docs
description: "Search documentation for answers"
inputs:
query:
type: string
description: "Search query"
required: true
limit:
type: number
description: "Max results to return"
optional: true
trigger:
- type: mcp
auth:
type: bearer
secret: ${env.MCP_TOKEN} # Supports $env.VAR_NAME syntax
flow:
- agent: search
agents:
- name: search
operation: data
config:
backend: vectorize
binding: DOCS_INDEX
operation: query
vector: ${input.query}
topK: ${input.limit || 5}
outputs:
results: ${search.output}
MCP Endpoints:
- GET /mcp/tools - List all ensembles exposed as MCP tools (with auto-generated input schemas)
- POST /mcp/tools/{name} - Invoke an ensemble via MCP protocol
The ensemble becomes available as an MCP tool with auto-generated schema:
// AI assistants can call via MCP
{
"tool": "search-docs",
"parameters": {
"query": "how to configure webhooks",
"limit": 10
}
}
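Over plain HTTP, the same call looks roughly like this (the flat request-body shape is an assumption; your deployment may expect a parameters wrapper as shown above):
curl -X POST https://your-worker.workers.dev/mcp/tools/search-docs \
  -H "Authorization: Bearer ${MCP_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"query": "how to configure webhooks", "limit": 10}'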
Conductor automatically converts your inputs block to MCP’s JSON Schema format:
# Your ensemble inputs
inputs:
code:
type: string
description: "Code to analyze"
required: true
language:
type: string
description: "Programming language"
optional: true
Becomes:
{
"inputSchema": {
"type": "object",
"properties": {
"code": {
"type": "string",
"description": "Code to analyze"
},
"language": {
"type": "string",
"description": "Programming language"
}
},
"required": ["code"]
}
}
Authentication Options
Bearer Token (simple or JWT):
trigger:
- type: mcp
auth:
type: bearer
secret: $env.MCP_TOKEN # Simple token comparison
If JWT_SECRET is configured in your environment, bearer tokens are validated as JWTs.
OAuth (coming soon):
trigger:
- type: mcp
auth:
type: oauth # Validates JWT format, full OAuth requires external provider
Public (no authentication):
trigger:
- type: mcp
public: true # No authentication required
See MCP Integration for the complete guide.
Email Triggers
Trigger ensembles via Cloudflare Email Routing. Conductor fully parses RFC822 emails including MIME multipart content and attachments.
Basic Email Trigger
name: support-ticket-router
trigger:
- type: email
to: "support@*" # Wildcard matching for address patterns
# Or use specific addresses:
# addresses:
# - [email protected]
# - [email protected]
public: false
auth:
from:
- "*@example.com" # Only accept from company domain
reply_with_output: true
flow:
- agent: classify-ticket
- agent: route-ticket
agents:
- name: classify-ticket
operation: think
config:
provider: anthropic
model: claude-sonnet-4
prompt: |
Classify this support email:
From: ${input.from}
Subject: ${input.subject}
Body: ${input.body}
- name: route-ticket
operation: http
config:
url: ${env.TICKET_SYSTEM_API}/tickets
method: POST
body:
category: ${classify-ticket.output.category}
priority: ${classify-ticket.output.priority}
content: ${input.body}
outputs:
ticketId: ${route-ticket.output.id}
category: ${classify-ticket.output.category}
Configure Cloudflare Email Routing to forward to your Worker.
Conductor parses RFC822 emails and provides structured data to your ensemble:
| Field | Type | Description |
|---|---|---|
| input.from | string | Sender email address |
| input.to | string | Recipient email address |
| input.subject | string | Email subject line |
| input.body | string | Plain text body |
| input.html | string \| null | HTML body (if present) |
| input.headers | Record<string, string> | All email headers as key-value pairs |
| input.attachments | Array<Attachment> | File attachments (see below) |
Attachment format:
{
filename: string; // Original filename
contentType: string; // MIME type (e.g., "application/pdf")
content: string; // Base64-encoded content
}
Example: Processing attachments
agents:
- name: process-invoice
operation: code
config:
handler: |
const { attachments } = context.input
const pdfAttachment = attachments.find(a => a.contentType === 'application/pdf')
if (pdfAttachment) {
// Process PDF content (base64 encoded)
return { hasPdf: true, filename: pdfAttachment.filename }
}
return { hasPdf: false }
input:
attachments: ${input.attachments}
Reply with Output
When reply_with_output: true, ensemble outputs are sent back via email:
trigger:
- type: email
addresses: [[email protected]]
reply_with_output: true
outputs:
response: ${process-email.output.response}
Queue Triggers
Process Cloudflare Queue messages in batches.
Basic Queue Consumer
name: task-processor
trigger:
- type: queue
queue: TASK_QUEUE
batch_size: 10
max_retries: 3
max_wait_time: 5 # seconds
flow:
- agent: process-batch
agents:
- name: process-batch
operation: queue
config:
mode: consume
queue: TASK_QUEUE
outputs:
processed: ${process-batch.output.count}
Queue Configuration
- queue - Cloudflare Queue binding name (must match wrangler.toml)
- batch_size - Maximum messages per batch (default: 10)
- max_retries - Retry failed messages (default: 3)
- max_wait_time - Max seconds to wait for batch to fill
Note: To send messages to queues, use the queue operation - see Queue Operation documentation.
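The queue binding itself comes from wrangler.toml. A minimal sketch (the queue name is illustrative; the consumer settings should mirror the trigger config above):
[[queues.producers]]
binding = "TASK_QUEUE"
queue = "task-queue"

[[queues.consumers]]
queue = "task-queue"
max_batch_size = 10
max_retries = 3
max_batch_timeout = 5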
Cron Triggers
Schedule ensemble execution with cron expressions.
Basic Cron Trigger
name: daily-report
trigger:
- type: cron
cron: "0 8 * * *" # Daily at 8 AM UTC
timezone: "America/New_York"
enabled: true
flow:
- agent: generate-report
- agent: send-email
agents:
- name: generate-report
operation: storage
config:
type: d1
query: |
SELECT COUNT(*) as orders,
SUM(total) as revenue
FROM orders
WHERE created_at >= strftime('%s', 'now', '-1 day') * 1000
- name: send-email
operation: email
config:
to: [[email protected]]
subject: "Daily Report - ${new Date().toDateString()}"
body: |
Orders: ${generate-report.output[0].orders}
Revenue: $${generate-report.output[0].revenue}
outputs:
sent: ${send-email.success}
Standard cron syntax (5 or 6 fields):
* * * * *
│ │ │ │ │
│ │ │ │ └─ Day of week (0-7, 0 and 7 are Sunday)
│ │ │ └─── Month (1-12)
│ │ └───── Day of month (1-31)
│ └─────── Hour (0-23)
└───────── Minute (0-59)
Examples:
"0 0 * * *" - Daily at midnight UTC
"0 */4 * * *" - Every 4 hours
"0 9 * * 1-5" - Weekdays at 9 AM
"0 0 1 * *" - First day of month
"0 0 * * 0" - Every Sunday
Pass data to scheduled executions:
trigger:
- type: cron
cron: "0 8 * * *"
timezone: "America/New_York"
input:
report_type: "daily"
recipients: ["[email protected]"]
metadata:
description: "Daily morning report"
team: "analytics"
Access in ensemble:
agents:
- name: process
operation: think
config:
prompt: "Generate ${input.report_type} report"
Multiple Cron Triggers
Ensembles can have multiple schedules:
trigger:
# Daily report
- type: cron
cron: "0 8 * * *"
timezone: "America/New_York"
input:
frequency: "daily"
# Weekly summary
- type: cron
cron: "0 9 * * 1"
timezone: "America/New_York"
input:
frequency: "weekly"
Disable Cron Trigger
Temporarily disable without removing:
trigger:
- type: cron
cron: "0 0 * * *"
enabled: false # Disabled
Access schedule information in ensemble:
flow:
- agent: process
input:
cron: ${input._schedule.cron}
timezone: ${input._schedule.timezone}
triggered_at: ${input._schedule.triggeredAt}
Build Triggers
Run ensembles at build time to generate static content. Build triggers execute during the ensemble conductor build command and are useful for generating documentation, static pages, or pre-computing data.
Basic Build Trigger
name: generate-docs
trigger:
- type: build
enabled: true
output: ./dist/docs
flow:
- agent: docs
input: { action: generate-openapi }
agents:
- name: docs
operation: docs
config:
format: openapi
version: 3.0.0
includeSchemas: true
outputs:
path: ${trigger.output}/openapi.json
content: ${docs.output}
Run with: ensemble conductor build
Pass data to build-time executions:
trigger:
- type: build
enabled: true
output: ./dist/static
input:
format: json
includeExamples: true
metadata:
description: "Generate API documentation"
version: "1.0.0"
flow:
- agent: docs
input:
format: ${trigger.input.format}
examples: ${trigger.input.includeExamples}
Access trigger metadata in ensemble:
agents:
- name: process
operation: think
config:
prompt: "Generate ${trigger.metadata.description} v${trigger.metadata.version}"
Multiple Build Triggers
Generate different static assets:
trigger:
# Generate OpenAPI docs
- type: build
enabled: true
output: ./dist/docs
input: { action: generate-openapi }
# Generate static site
- type: build
enabled: true
output: ./dist/site
input: { action: generate-site }
# Pre-compute analytics
- type: build
enabled: true
output: ./dist/data
input: { action: compute-analytics }
Conditional Build
Use enabled to skip builds conditionally:
trigger:
- type: build
enabled: ${env.BUILD_DOCS === 'true'}
output: ./dist/docs
CLI Triggers
Create custom CLI commands that execute ensembles. CLI triggers are invoked via ensemble conductor run <command> and support options with defaults and validation.
Basic CLI Trigger
name: generate-docs
trigger:
- type: cli
command: generate-docs
description: Generate documentation
flow:
- agent: docs
input: { action: generate-openapi }
agents:
- name: docs
operation: docs
config:
format: openapi
outputs:
result: ${docs.output}
Run with: ensemble conductor run generate-docs
CLI with Options
Define command-line options with types and defaults:
trigger:
- type: cli
command: generate-docs
description: Generate documentation in various formats
options:
- name: format
type: string
default: yaml
description: Output format (yaml, json, html)
- name: output
type: string
required: true
description: Output file path
- name: verbose
type: boolean
default: false
description: Enable verbose logging
flow:
- agent: docs
input:
format: ${trigger.options.format}
outputPath: ${trigger.options.output}
verbose: ${trigger.options.verbose}
Run with:
ensemble conductor run generate-docs --format=json --output=./docs/api.json --verbose
CLI Option Types
Supported option types:
options:
# String option
- name: format
type: string
default: yaml
# Number option
- name: limit
type: number
default: 100
# Boolean flag
- name: verbose
type: boolean
default: false
# Required option
- name: output
type: string
required: true
Access Options in Flow
CLI options are available via ${trigger.options.*}:
flow:
- operation: code
config:
handler: |
const format = context.input.format
const output = context.input.outputPath
console.log(`Generating ${format} to ${output}`)
return { success: true }
input:
format: ${trigger.options.format}
outputPath: ${trigger.options.output}
Startup Triggers
Run ensembles on Worker cold start, before HTTP routes are registered. Startup triggers are ideal for cache warming, health checks, and initialization tasks.
Cold start semantics: Cloudflare Workers naturally cold start after a few minutes of inactivity. Startup triggers run once per cold start - not on every request.
Basic Startup Trigger
name: cache-warmer
description: Pre-warm caches on Worker startup
trigger:
- type: startup
input:
cache_keys: ['products', 'categories', 'users']
flow:
- agent: warm-cache
agents:
- name: warm-cache
operation: code
config:
handler: |
for (const key of context.input.cache_keys) {
const data = await context.env.DB.prepare(`SELECT * FROM ${key} LIMIT 100`).all()
await context.env.KV.put(`cache:${key}`, JSON.stringify(data.results))
}
return { warmed: context.input.cache_keys.length }
outputs:
result: ${warm-cache.output}
Health Check on Startup
Verify dependencies are available before serving requests:
name: startup-health-check
description: Verify dependencies on startup
trigger:
- type: startup
metadata:
priority: high
flow:
- agent: check-db
- agent: check-kv
agents:
- name: check-db
operation: code
config:
handler: |
const result = await context.env.DB.prepare('SELECT 1').first()
if (!result) throw new Error('Database not responding')
return { db: 'ok' }
- name: check-kv
operation: code
config:
handler: |
await context.env.KV.put('health:startup', Date.now().toString())
return { kv: 'ok' }
outputs:
health: ${check-kv.output}
Pass static input data to startup ensembles:
trigger:
- type: startup
enabled: true
input:
warmCaches: true
preloadUsers: 100
metadata:
description: "Initialize application state"
priority: "high"
Access in ensemble:
agents:
- name: init
operation: code
config:
handler: |
if (context.input.warmCaches) {
// Pre-warm caches
}
return { initialized: true }
Disable Startup Trigger
Temporarily disable without removing:
trigger:
- type: startup
enabled: false # Disabled
input:
action: warm-cache
Keep startup triggers fast (under 5 seconds). Cloudflare Workers have a 30-second initialization timeout. While startup triggers run non-blocking via waitUntil(), slow triggers delay background task completion.
Good use cases:
- Cache warming (KV reads/writes)
- Health checks (database ping)
- Configuration loading
- Metrics initialization
Avoid:
- Heavy data processing
- Long-running API calls
- Complex AI inference
- Large file operations
Startup vs Cron
| Feature | Startup | Cron |
|---|---|---|
| When | On cold start | On schedule |
| Frequency | Variable (depends on traffic) | Predictable |
| Blocking | Non-blocking (waitUntil) | Blocking |
| Use case | Initialization | Recurring tasks |
If you need predictable timing, use cron. If you need “run once when Worker starts”, use startup.
Multiple Triggers
Ensembles can have multiple triggers of different types:
name: data-processor
trigger:
# HTTP API
- type: webhook
path: /webhooks/process
methods: [POST]
auth:
type: bearer
secret: ${env.API_TOKEN}
# MCP Tool
- type: mcp
public: false
auth:
type: bearer
secret: ${env.MCP_TOKEN}
# Email
- type: email
addresses: [[email protected]]
reply_with_output: true
# Queue
- type: queue
queue: PROCESS_QUEUE
batch_size: 10
# Scheduled
- type: cron
cron: "0 */6 * * *" # Every 6 hours
timezone: "UTC"
flow:
- agent: process-data
agents:
- name: process-data
operation: think
config:
provider: anthropic
model: claude-sonnet-4
prompt: "Process: ${input}"
outputs:
result: ${process-data.output}
This ensemble can be invoked via:
- POST to /webhooks/process
- MCP tool call data-processor
- Email to [email protected]
- Queue message to PROCESS_QUEUE
- Cron schedule every 6 hours
Trigger Security
Default-Deny Policy
All triggers (except queue and cron) require either:
- Authentication (auth configuration), OR
- Explicit public access (public: true)
✅ Valid:
trigger:
# Has authentication
- type: webhook
path: /secure
auth:
type: bearer
secret: ${env.TOKEN}
# Explicitly public
- type: webhook
path: /public
public: true
# Queue/cron don't need auth (internal triggers)
- type: queue
queue: TASK_QUEUE
❌ Invalid:
trigger:
# ERROR: No auth and not marked public
- type: webhook
path: /unsafe
Best Practices
- Use environment variables for secrets:
  auth:
    type: bearer
    secret: ${env.API_TOKEN} # Never hardcode!
- Verify webhook signatures: use the signature auth type for external webhooks.
- Limit email senders:
  trigger:
    - type: email
      addresses: [[email protected]]
      auth:
        from: ["*@trusted-domain.com"]
- Use async for long operations:
  trigger:
    - type: webhook
      async: true
      timeout: 300000
Configuration Reference
HTTP Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "http" | Yes | Trigger type |
| path | string | No | URL path with params (default: /{ensemble-name}) |
| paths | array | No | Multiple paths configuration (alternative to path) |
| paths[].path | string | Yes | URL path with params |
| paths[].methods | string[] | Yes | HTTP methods for this path |
| methods | string[] | No | HTTP methods (default: ["GET"]) |
| auth | object | Conditional | Authentication config (required unless public: true) |
| public | boolean | No | Allow unauthenticated access (default: false) |
| rateLimit | object | No | Rate limiting configuration |
| rateLimit.requests | number | Yes | Max requests per window |
| rateLimit.window | number | Yes | Time window in seconds |
| rateLimit.key | "ip" \| "user" | No | Rate limit by IP or user (default: "ip") |
| cors | object | No | CORS configuration |
| cors.origin | string \| string[] | No | Allowed origins |
| cors.credentials | boolean | No | Allow credentials |
| responses | object | No | Response type configuration |
| responses.html | object | No | HTML response config |
| responses.json | object | No | JSON response config |
| templateEngine | "liquid" \| "handlebars" \| "simple" | No | Template engine for HTML (default: "liquid") |
| middleware | function[] | No | Custom Hono middleware |
Webhook Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "webhook" | Yes | Trigger type |
| path | string | No | Endpoint path (default: /{ensemble-name}) |
| methods | string[] | No | HTTP methods (default: ["POST"]) |
| auth | object | Conditional | Authentication config (required unless public: true) |
| public | boolean | No | Allow unauthenticated access (default: false) |
| async | boolean | No | Background execution (default: false) |
| timeout | number | No | Timeout in milliseconds |
MCP Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "mcp" | Yes | Trigger type |
| auth | object | Conditional | Authentication config (required unless public: true) |
| public | boolean | No | Allow unauthenticated access (default: false) |
Email Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "email" | Yes | Trigger type |
| addresses | string[] | Yes | Email addresses to receive |
| auth | object | No | Sender whitelist patterns |
| public | boolean | No | Allow any sender (default: false) |
| reply_with_output | boolean | No | Send output via email reply (default: false) |
Queue Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "queue" | Yes | Trigger type |
| queue | string | Yes | Queue binding name |
| batch_size | number | No | Max messages per batch (default: 10) |
| max_retries | number | No | Retry failed messages (default: 3) |
| max_wait_time | number | No | Max seconds to wait for batch |
Cron Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "cron" | Yes | Trigger type |
| cron | string | Yes | Cron expression |
| timezone | string | No | IANA timezone (default: "UTC") |
| enabled | boolean | No | Enable trigger (default: true) |
| input | object | No | Default input data |
| metadata | object | No | Schedule metadata |
Build Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "build" | Yes | Trigger type |
| enabled | boolean | No | Enable trigger (default: true) |
| output | string | Yes | Output directory path |
| input | object | No | Default input data |
| metadata | object | No | Build metadata |
CLI Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "cli" | Yes | Trigger type |
| command | string | Yes | Command name |
| description | string | No | Command description |
| options | array | No | Command-line options |
| options[].name | string | Yes | Option name |
| options[].type | "string" \| "number" \| "boolean" | Yes | Option type |
| options[].default | any | No | Default value |
| options[].required | boolean | No | Whether option is required (default: false) |
| options[].description | string | No | Option description |
Startup Trigger
| Field | Type | Required | Description |
|---|---|---|---|
| type | "startup" | Yes | Trigger type |
| enabled | boolean | No | Enable trigger (default: true) |
| input | object | No | Static input data for startup execution |
| metadata | object | No | Additional metadata (e.g., priority, description) |
Output Formats
The format field in the output block controls response serialization and the Content-Type header. Use it for non-JSON responses such as CSV, XML, or YAML.
| Type | Content-Type | Description |
|---|---|---|
| json | application/json | JSON serialization (default) |
| text | text/plain | Plain text |
| html | text/html | HTML content |
| xml | application/xml | XML content |
| csv | text/csv | CSV serialization from arrays |
| markdown | text/markdown | Markdown content |
| yaml | application/x-yaml | YAML serialization |
| ics | text/calendar | iCalendar format |
| rss | application/rss+xml | RSS feed |
| atom | application/atom+xml | Atom feed |
CSV Export Example
name: export-users
trigger:
- type: http
path: /export/users.csv
methods: [GET]
public: true
agents:
- name: fetch-users
operation: data
config:
backend: d1
query: SELECT id, name, email FROM users
output:
status: 200
headers:
Content-Disposition: attachment; filename="users.csv"
format:
type: csv
extract: users
body:
users: ${fetch-users.output}
YAML Config Example
name: app-config
trigger:
- type: http
path: /config.yaml
methods: [GET]
public: true
agents:
- name: build-config
operation: transform
config:
expression: |
{
"version": "1.0",
"features": ["auth", "analytics"]
}
output:
status: 200
format:
type: yaml
extract: config
body:
config: ${build-config.output}
iCalendar Event
name: calendar-event
trigger:
- type: http
path: /event.ics
methods: [GET]
public: true
output:
status: 200
headers:
Content-Disposition: attachment; filename="event.ics"
format: ics
body:
calendar: ${generate-event.output}
The extract option specifies which field from the body should be serialized. If not specified, the entire body is serialized.
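For example, extract can also be used with a JSON response to return just one field of the body, assuming extract behaves the same way for JSON as it does for CSV (a sketch reusing the fetch-users agent from the CSV example above):
output:
  status: 200
  format:
    type: json
    extract: users   # response body is the users array, not the wrapping object
  body:
    users: ${fetch-users.output}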
Triggers vs API Routes
Conductor provides two ways to execute ensembles:
Triggers (This Page)
Triggers are defined in ensemble YAML and provide:
- Path-based routing with parameters (/users/:id)
- Per-trigger authentication configuration
- Rate limiting and CORS settings
- Auto-discovery from ensemble definitions
trigger:
- type: http
path: /api/users/:id
public: true # Explicit public access
API Execute Routes
The /api/v1/execute/* routes provide programmatic access:
# Execute via API (requires authentication by default)
curl -X POST https://your-worker.workers.dev/api/v1/execute/ensemble/my-workflow \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"input": {}}'
API Execution Control:
You can control which ensembles are accessible via the Execute API using:
- Project-level policy in conductor.config.ts:
  api: {
    execution: {
      ensembles: { requireExplicit: true } // Require opt-in
    }
  }
- Per-ensemble control via apiExecutable:
  name: internal-workflow
  apiExecutable: false # Prevent Execute API access
Key Differences:
| Feature | Triggers | API Routes |
|---|---|---|
| Configuration | Per-trigger in YAML | Global in conductor.config.ts |
| Default Auth | Requires public: true or auth | Requires auth (secure by default) |
| Use Case | Public APIs, webhooks | Service-to-service, internal |
| Permissions | Per-trigger auth | Permission-based scoping |
| Access Control | Trigger config | apiExecutable flag |
See Security & Authentication for complete auth documentation.
Next Steps