# Prompt Chaining
Prompt chaining is a pattern where multiple LLM calls are executed in sequence, with each call building upon the output of the previous one.
## Basic Structure
```yaml
states:
  - id: start
    type: initial
    next: generate_content
  - id: generate_content
    type: llm
    model: main_model
    prompt: |
      Generate content about this topic:
      {topic}
    input:
      topic: "${input.topic}"
    next: refine_content
  - id: refine_content
    type: llm
    model: main_model
    prompt: |
      Refine this content:
      {content}
      Make it more engaging and professional.
    input:
      content: "${previous.output}"
    next: end
  - id: end
    type: output
    value: "${previous.output}"
```

## Common Use Cases

### 1. Content Generation and Translation
```yaml
states:
  - id: start
    type: initial
    next: write_marketing_copy
  - id: write_marketing_copy
    type: llm
    model: main_model
    prompt: |
      Write marketing copy for the following product:
      {product_description}
      The copy should highlight these key features:
      {features}
    input:
      product_description: "${input.product}"
      features: "${input.features}"
    next: translate_copy
  - id: translate_copy
    type: llm
    model: main_model
    prompt: |
      Translate the following marketing copy to {language}:
      {copy}
    input:
      copy: "${previous.output}"
      language: "${input.target_language}"
    next: end
```

### 2. Code Generation and Review
```yaml
states:
  - id: start
    type: initial
    next: generate_code
  - id: generate_code
    type: llm
    model: main_model
    prompt: |
      Generate code for this function:
      {function_description}
      Use {language} and follow these requirements:
      {requirements}
    input:
      function_description: "${input.description}"
      language: "${input.language}"
      requirements: "${input.requirements}"
    next: review_code
  - id: review_code
    type: llm
    model: main_model
    prompt: |
      Review this code for best practices and potential issues:
      {code}
      Provide feedback in the following format:
      1. Code Quality
      2. Security Concerns
      3. Performance Issues
      4. Suggested Improvements
    input:
      code: "${previous.output}"
    next: end
```

### 3. Data Analysis and Visualization
```yaml
states:
  - id: start
    type: initial
    next: analyze_data
  - id: analyze_data
    type: llm
    model: main_model
    prompt: |
      Analyze this dataset:
      {data}
      Provide insights about:
      1. Key trends
      2. Anomalies
      3. Correlations
    input:
      data: "${input.dataset}"
    next: generate_visualization
  - id: generate_visualization
    type: llm
    model: main_model
    prompt: |
      Based on this analysis, suggest the best visualization:
      Analysis: {analysis}
      Provide:
      1. Chart type
      2. Data mapping
      3. Color scheme
      4. Annotations
    input:
      analysis: "${previous.output}"
    next: end
```

## Advanced Patterns

### 1. Conditional Chaining
```yaml
states:
  - id: start
    type: initial
    next: generate_content
  - id: generate_content
    type: llm
    model: main_model
    prompt: "Generate content: {topic}"
    input:
      topic: "${input.topic}"
    transitions:
      - when: "${output.length < 100}"
        next: expand_content
      - next: refine_content
  - id: expand_content
    type: llm
    model: main_model
    prompt: |
      Expand this content with more details:
      {content}
    input:
      content: "${previous.output}"
    next: refine_content
  - id: refine_content
    type: llm
    model: main_model
    prompt: |
      Refine this content:
      {content}
    input:
      content: "${previous.output}"
    next: end
```

### 2. Parallel Chaining
```yaml
states:
  - id: start
    type: initial
    next: generate_content
  - id: generate_content
    type: llm
    model: main_model
    prompt: "Generate content: {topic}"
    input:
      topic: "${input.topic}"
    next: parallel_processing
  - id: parallel_processing
    type: parallel
    branches:
      - id: check_grammar
        type: llm
        model: grammar_model
        prompt: "Check grammar: {content}"
      - id: check_tone
        type: llm
        model: tone_model
        prompt: "Check tone: {content}"
    input:
      content: "${states.generate_content.output}"
    next: combine_feedback
  - id: combine_feedback
    type: llm
    model: main_model
    prompt: |
      Apply these improvements to the content:
      Original: {original}
      Grammar: {grammar}
      Tone: {tone}
    input:
      original: "${states.generate_content.output}"
      grammar: "${branches.check_grammar.output}"
      tone: "${branches.check_tone.output}"
    next: end
```

## Best Practices
- Clear Purpose: Each prompt should have a clear, specific purpose
- Context Preservation: Pass necessary context between prompts
- Error Handling: Implement fallbacks for each step (see the sketch after this list)
- Output Format: Define clear output formats for each step
- Model Selection: Choose appropriate models for each task
- Testing: Test each step independently and as a chain
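
None of the examples above show what happens when an LLM call fails. This guide's syntax doesn't define error-handling fields, but as a minimal sketch, assuming the engine supported hypothetical `retry` and `on_error` keys, a fallback step might look like this:

```yaml
states:
  - id: generate_content
    type: llm
    model: main_model
    prompt: "Generate content: {topic}"
    input:
      topic: "${input.topic}"
    # Hypothetical fields (not part of the examples above):
    # retry the call twice, then divert to a fallback state.
    retry: 2
    on_error:
      next: fallback_content
    next: refine_content
  - id: fallback_content
    type: llm
    model: fallback_model  # assumed: a simpler, more reliable model
    prompt: "Write a short, plain draft about: {topic}"
    input:
      topic: "${input.topic}"
    next: refine_content
```

Whatever syntax your engine actually provides, the principle is the same: every step that can fail should have a defined path forward.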
## Common Pitfalls
- Context Loss: Not passing enough context between prompts (see the sketch after this list)
- Over-complexity: Creating unnecessarily long chains
- Inconsistent Formats: Not maintaining consistent output formats
- Error Propagation: Not handling errors at each step
- Resource Usage: Not optimizing model usage
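
Context loss usually appears when every step reads only `${previous.output}`. The parallel chaining example above already demonstrates the fix: reference earlier states by id with `${states.<id>.output}`, so a later step can see both the original input and intermediate results. A minimal sketch (state names are illustrative):

```yaml
states:
  - id: summarize
    type: llm
    model: main_model
    prompt: |
      Summarize this article:
      {article}
    input:
      article: "${input.article}"
    next: write_headline
  - id: write_headline
    type: llm
    model: main_model
    prompt: |
      Write a headline for this article.
      Full text: {article}
      Summary: {summary}
    input:
      # Pass the original input alongside the intermediate output,
      # so this step is not working from the summary alone.
      article: "${input.article}"
      summary: "${states.summarize.output}"
    next: end
```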
## Next Steps
- Learn about Routing
- Explore Parallelization
- Read Best Practices