Overview
The ThinkMember class enables AI-powered reasoning, planning, and decision-making within workflows. It uses Cloudflare Workers AI or external LLM providers for intelligent processing.
import { ThinkMember } from '@ensemble-edge/conductor';

const thinker = new ThinkMember({
  name: 'analyze-sentiment',
  config: {
    model: '@cf/meta/llama-3.1-8b-instruct',
    prompt: 'Analyze the sentiment of: ${input.text}'
  }
});

const result = await thinker.execute({
  text: 'I love this product!'
});
Constructor
new ThinkMember(options: ThinkMemberOptions)
- `options` (ThinkMemberOptions, required): Think member configuration (extends MemberOptions)
- `options.config` (ThinkConfig, required): AI configuration
- `options.config.model` (string, required): AI model identifier
- `options.config.prompt` (string, required): Prompt template with expressions
- `options.config.temperature` (number): Sampling temperature (0-1)
- `options.config.systemPrompt` (string): System prompt for context
- `options.config.provider` (string, default: "cloudflare"): AI provider: cloudflare, openai, or anthropic
- `options.config.responseFormat` (string): Response format: text, json, or structured
interface ThinkConfig {
  model: string;
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
  provider?: 'cloudflare' | 'openai' | 'anthropic' | 'custom';
  responseFormat?: 'text' | 'json' | 'structured';
  schema?: JSONSchema;
  tools?: Tool[];
}
Methods
execute()
Execute AI reasoning task.
async execute(input: any): Promise<ThinkResult>

- `input` (any): Input data for the prompt template
Returns: Promise<ThinkResult>
interface ThinkResult {
  content: string;
  reasoning?: string;
  confidence?: number;
  metadata?: {
    model: string;
    tokens: number;
    duration: number;
  };
}
Example:
const result = await thinker.execute({
  text: 'Customer feedback: Great service but shipping was slow.',
  categories: ['service', 'shipping', 'product']
});

console.log(result.content);           // AI-generated response
console.log(result.confidence);        // 0.95
console.log(result.metadata?.tokens);  // 156 (metadata is optional)
buildPrompt()
Build final prompt from template and input.
buildPrompt(input: any): string
Returns: string - Rendered prompt
Example:
const thinker = new ThinkMember({
  name: 'classify',
  config: {
    model: '@cf/meta/llama-3.1-8b-instruct',
    // Escape the placeholders so they reach Conductor as-is instead of
    // being interpolated by JavaScript at construction time.
    prompt: `
Classify the following text into one of these categories: \${input.categories.join(', ')}

Text: \${input.text}

Category:
`.trim()
  }
});

const prompt = thinker.buildPrompt({
  text: 'Hello world',
  categories: ['greeting', 'question', 'statement']
});

console.log(prompt);
// Classify the following text into one of these categories: greeting, question, statement
//
// Text: Hello world
//
// Category:
Configuration Examples
Sentiment Analysis
- member: analyze-sentiment
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    prompt: |
      Analyze the sentiment of the following text.
      Return only: positive, negative, or neutral.

      Text: ${input.text}

      Sentiment:
    temperature: 0.3
  input:
    text: ${input.review}
Text Classification
- member: classify-ticket
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    systemPrompt: "You are a customer support ticket classifier."
    prompt: |
      Classify this support ticket into one category:
      - technical
      - billing
      - general

      Ticket: ${input.description}

      Category:
    temperature: 0.2
  input:
    description: ${input.ticketText}
JSON Response
- member: extract-entities
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    responseFormat: json
    schema:
      type: object
      properties:
        entities:
          type: array
          items:
            type: object
            properties:
              name: { type: string }
              type: { type: string }
              confidence: { type: number }
    prompt: |
      Extract named entities from this text and return as JSON:

      ${input.text}
  input:
    text: ${input.document}
Multi-Step Reasoning
- member: plan-actions
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    systemPrompt: |
      You are an AI assistant that breaks down complex tasks into steps.
      Think through the problem step by step.
    prompt: |
      Task: ${input.task}
      Available tools: ${input.tools.join(', ')}

      Create a step-by-step plan to accomplish this task:
    temperature: 0.7
    maxTokens: 2048
  input:
    task: ${input.userRequest}
    tools: ${available-tools.output.list}
AI Providers
Cloudflare Workers AI
const thinker = new ThinkMember({
  name: 'cloudflare-ai',
  config: {
    provider: 'cloudflare',
    model: '@cf/meta/llama-3.1-8b-instruct',
    prompt: '${input.prompt}'
  }
});
Available models:

- @cf/meta/llama-3.1-8b-instruct - Fast, general purpose
- @cf/meta/llama-3.1-70b-instruct - Larger, more capable
- @cf/mistral/mistral-7b-instruct-v0.1 - Fast instruction following
- @cf/openchat/openchat-3.5-0106 - Chat optimized
OpenAI
const thinker = new ThinkMember({
  name: 'openai',
  config: {
    provider: 'openai',
    model: 'gpt-4-turbo-preview',
    prompt: '${input.prompt}',
    apiKey: '${env.OPENAI_API_KEY}'
  }
});
Anthropic Claude
const thinker = new ThinkMember({
  name: 'claude',
  config: {
    provider: 'anthropic',
    model: 'claude-3-sonnet-20240229',
    prompt: '${input.prompt}',
    apiKey: '${env.ANTHROPIC_API_KEY}'
  }
});
Advanced Features
Structured Output
Force JSON schema conformance:
- member: extract-data
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    responseFormat: structured
    schema:
      type: object
      required: [name, email, intent]
      properties:
        name: { type: string }
        email: { type: string, format: email }
        intent: { type: string, enum: [purchase, support, inquiry] }
    prompt: |
      Extract structured information from this message:

      ${input.message}
Tool Use
Enable AI to use tools:
- member: research-assistant
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    tools:
      - name: search_web
        description: Search the web for information
        parameters:
          query: { type: string, description: Search query }
      - name: calculate
        description: Perform mathematical calculations
        parameters:
          expression: { type: string, description: Math expression }
    prompt: |
      Answer this question using available tools:
      ${input.question}
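How tool calls are surfaced back to application code depends on the runtime. As an illustrative sketch only (the `ToolCall` shape and the handler implementations below are assumptions, not the Conductor API), dispatching a model-requested tool call might look like:

```typescript
// Illustrative shapes and handlers, not the Conductor API.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

// Placeholder handlers matching the tool names declared in the YAML above.
const handlers: Record<string, ToolHandler> = {
  search_web: async (args) => `search results for: ${String(args.query)}`,
  calculate: async (args) => `evaluated: ${String(args.expression)}`
};

// Route a model-requested tool call to the matching handler.
async function dispatchToolCall(call: ToolCall): Promise<string> {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`Unknown tool: ${call.name}`);
  return handler(call.arguments);
}
```

The result string would typically be fed back into a follow-up prompt so the model can incorporate it into its answer.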
Few-Shot Learning
Provide examples in the prompt:
- member: few-shot-classifier
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    prompt: |
      Classify customer messages as urgent or not urgent.

      Examples:

      Input: "ASAP! System is down!"
      Output: urgent

      Input: "Quick question about pricing"
      Output: not_urgent

      Input: "Cannot access my account!!!"
      Output: urgent

      Now classify this:
      Input: ${input.message}
      Output:
Chain of Thought
Encourage step-by-step reasoning:
- member: complex-reasoning
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    systemPrompt: |
      Think through problems step by step.
      Show your reasoning before giving an answer.
    prompt: |
      ${input.question}

      Let's think step by step:
    temperature: 0.7
Self-Consistency
Run multiple times and pick most common answer:
- member: self-consistent
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    prompt: ${input.question}
    samples: 5             # Generate 5 responses
    consistency: majority  # Pick most common answer
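If the `samples`/`consistency` options are unavailable in your setup, the same idea can be approximated in application code. A minimal sketch, assuming answers are short labels that can be normalized by trimming and lowercasing before counting (adjust to your task's answer format):

```typescript
// Pick the most common answer among several independent samples.
function majorityVote(answers: string[]): string {
  const counts = new Map<string, number>();
  for (const a of answers) {
    const key = a.trim().toLowerCase();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  let best = '';
  let bestCount = 0;
  for (const [key, count] of counts) {
    if (count > bestCount) {
      best = key;
      bestCount = count;
    }
  }
  return best;
}

// Run the same prompt several times and keep the majority answer.
// The runSample callback is an assumption, e.g.:
//   () => thinker.execute(input).then((r) => r.content)
async function selfConsistent(
  runSample: () => Promise<string>,
  samples = 5
): Promise<string> {
  const results = await Promise.all(
    Array.from({ length: samples }, () => runSample())
  );
  return majorityVote(results);
}
```

Use a moderate temperature (e.g. 0.7) so the samples actually differ; with temperature 0 all samples tend to be identical and voting adds nothing.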
Response Parsing
Text Response
const result = await thinker.execute({ text: 'Hello' });
console.log(result.content); // "Hello! How can I help you?"
JSON Response
const thinker = new ThinkMember({
  name: 'json-extractor',
  config: {
    model: '@cf/meta/llama-3.1-8b-instruct',
    responseFormat: 'json',
    prompt: 'Extract data from: ${input.text}'
  }
});

const result = await thinker.execute({ text: '...' });
const data = JSON.parse(result.content);
console.log(data.entities);
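Model output is not guaranteed to be valid JSON even when a JSON format is requested, so parsing defensively is worthwhile. A hedged sketch (the fence-stripping fallback is an assumption about common model behavior, not part of the ThinkMember API):

```typescript
// Parse model output as JSON, returning null instead of throwing on
// malformed output so the caller can retry or fall back.
function tryParseJson<T>(raw: string): T | null {
  try {
    return JSON.parse(raw) as T;
  } catch {
    // Some models wrap JSON in markdown fences; strip them and retry.
    const stripped = raw
      .replace(/^```(?:json)?\s*/i, '')
      .replace(/\s*```$/, '');
    try {
      return JSON.parse(stripped) as T;
    } catch {
      return null;
    }
  }
}
```

A `null` result is a natural trigger for a retry with a lower temperature or a more explicit "return only JSON" instruction.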
Structured Response
const thinker = new ThinkMember({
  name: 'structured',
  config: {
    model: '@cf/meta/llama-3.1-8b-instruct',
    responseFormat: 'structured',
    schema: {
      type: 'object',
      properties: {
        sentiment: { type: 'string', enum: ['positive', 'negative', 'neutral'] },
        confidence: { type: 'number', minimum: 0, maximum: 1 }
      }
    },
    prompt: 'Analyze: ${input.text}'
  }
});

const result = await thinker.execute({ text: '...' });

// result.content is automatically parsed and validated
console.log(result.content.sentiment);
console.log(result.content.confidence);
Cost Optimization
Caching
Cache AI responses for repeated inputs:
- member: cached-analysis
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    prompt: 'Analyze: ${input.text}'
    cache:
      enabled: true
      ttl: 3600000  # 1 hour
      key: ${input.text}
Smaller Models
Use smaller models for simple tasks:
# Fast, cheap model for classification
- member: classify
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    prompt: 'Classify: ${input.text}'

# Larger model only when needed
- member: complex-analysis
  type: Think
  condition: ${classify.output.confidence < 0.8}
  config:
    model: '@cf/meta/llama-3.1-70b-instruct'
    prompt: 'Detailed analysis: ${input.text}'
Token Limits
Set appropriate token limits:
- member: summarize
  type: Think
  config:
    model: '@cf/meta/llama-3.1-8b-instruct'
    maxTokens: 256  # Short summary
    prompt: 'Summarize in 2-3 sentences: ${input.article}'
Error Handling
import { AIProviderError } from '@ensemble-edge/conductor';

try {
  const result = await thinker.execute({ text: 'Hello' });
} catch (error) {
  if (error instanceof AIProviderError) {
    console.error('AI provider error:', {
      provider: error.provider,
      model: error.model,
      message: error.message
    });
  }
}
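Provider errors are often transient (rate limits, timeouts), so wrapping calls in a retry helper is a common pattern. A minimal sketch with exponential backoff; the retry policy and defaults here are assumptions, not part of the ThinkMember API:

```typescript
// Retry an async operation with exponential backoff.
// attempts and baseDelayMs are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Wait 500ms, 1000ms, 2000ms, ... between attempts.
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage (assuming a thinker instance as in the examples above):
// const result = await withRetry(() => thinker.execute({ text: 'Hello' }));
```

In production you would typically retry only errors known to be transient, and surface validation or schema errors immediately.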
Testing
import { ThinkMember } from '@ensemble-edge/conductor';
import { describe, it, expect } from 'vitest';

describe('ThinkMember', () => {
  it('classifies sentiment', async () => {
    const thinker = new ThinkMember({
      name: 'sentiment',
      config: {
        model: '@cf/meta/llama-3.1-8b-instruct',
        prompt: 'Sentiment of "${input.text}": ',
        temperature: 0.1
      }
    });

    const result = await thinker.execute({
      text: 'I love this!'
    });

    expect(result.content.toLowerCase()).toContain('positive');
  });

  it('returns structured data', async () => {
    const thinker = new ThinkMember({
      name: 'extractor',
      config: {
        model: '@cf/meta/llama-3.1-8b-instruct',
        responseFormat: 'json',
        prompt: 'Extract name from: ${input.text}'
      }
    });

    const result = await thinker.execute({
      text: 'My name is Alice'
    });

    const data = JSON.parse(result.content);
    expect(data.name).toBe('Alice');
  });
});
Best Practices
- Use appropriate models - Match task complexity
- Set lower temperature - For deterministic tasks
- Cache results - Reduce costs and latency
- Validate output - Don’t trust AI blindly
- Provide examples - Improve accuracy
- Set token limits - Control costs
- Handle errors - AI can fail
- Monitor usage - Track tokens and costs
- Test thoroughly - AI responses vary
- Use structured output - For parsing reliability
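The "Validate output" practice above can be sketched as a small guard that narrows raw model text to a known label before downstream code relies on it (the label set here mirrors the sentiment examples earlier; adapt it to your task):

```typescript
const ALLOWED_SENTIMENTS = ['positive', 'negative', 'neutral'] as const;
type Sentiment = (typeof ALLOWED_SENTIMENTS)[number];

// Narrow raw model output to a known label, or null if it doesn't match.
function validateSentiment(raw: string): Sentiment | null {
  const normalized = raw.trim().toLowerCase();
  return (ALLOWED_SENTIMENTS as readonly string[]).includes(normalized)
    ? (normalized as Sentiment)
    : null;
}
```

A `null` here signals that the model drifted off-format, which is usually the cue to retry with a stricter prompt or a lower temperature rather than to pass the value along.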