# Code Review Assistant
This example demonstrates how to build an autonomous code review assistant using NudgeLang. The assistant can analyze code changes, identify potential issues, and provide constructive feedback.
## Overview
The Code Review Assistant is designed to:
- Analyze code changes
- Identify potential issues
- Check code quality
- Suggest improvements
- Enforce best practices
- Generate review reports
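For concreteness, the input the workflow consumes might look like the payload below. The field names mirror the `${input.*}` references used in the implementation; the payload shape itself is an assumption, since NudgeLang's invocation API is not shown here.

```python
# Hypothetical input payload for the assistant. The keys mirror the
# "${input.diff}", "${input.channel}", and "${input.reviewers}" references
# in the workflow definition; how it is passed to the runtime is not
# specified by this example.
review_input = {
    "diff": (
        "--- a/app/utils.py\n"
        "+++ b/app/utils.py\n"
        "@@ -1,2 +1,3 @@\n"
        " def load_config(path):\n"
        "-    return open(path).read()\n"
        "+    with open(path) as f:\n"
        "+        return f.read()\n"
    ),
    "channel": "#code-review",      # where the send_review tool posts the report
    "reviewers": ["alice", "bob"],  # humans to escalate to if needed
}
```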
## Implementation
```yaml
name: code_review_assistant
version: 1.0.0
description: Autonomous code review assistant

states:
  - id: start
    type: initial
    next: analyze_changes

  - id: analyze_changes
    type: llm
    model: analyzer
    prompt: |
      Analyze these code changes:
      {changes}

      Consider:
      1. Type of changes
      2. Impact scope
      3. Complexity level
      4. Risk factors
    input:
      changes: "${input.diff}"
    next: check_quality

  - id: check_quality
    type: llm
    model: quality_checker
    prompt: |
      Check code quality for:
      {changes}

      Evaluate:
      - Code style
      - Best practices
      - Performance
      - Security
      - Maintainability
    input:
      changes: "${input.diff}"
    next: identify_issues

  - id: identify_issues
    type: llm
    model: issue_finder
    prompt: |
      Identify potential issues in:
      {changes}

      Look for:
      - Bugs
      - Security vulnerabilities
      - Performance problems
      - Code smells
      - Anti-patterns
    input:
      changes: "${input.diff}"
    next: generate_feedback

  - id: generate_feedback
    type: llm
    model: feedback_generator
    prompt: |
      Generate review feedback for:
      Changes: {changes}
      Quality Issues: {quality_issues}
      Potential Issues: {potential_issues}

      Guidelines:
      - Be constructive
      - Provide examples
      - Suggest improvements
      - Reference standards
    input:
      changes: "${input.diff}"
      quality_issues: "${states.check_quality.output}"
      potential_issues: "${states.identify_issues.output}"
    next: evaluate_feedback

  - id: evaluate_feedback
    type: llm
    model: evaluator
    prompt: |
      Evaluate feedback quality:
      Feedback: {feedback}
      Changes: {changes}
      Issues: {issues}

      Check for:
      - Clarity
      - Actionability
      - Completeness
      - Tone
    input:
      feedback: "${previous.output}"
      changes: "${input.diff}"
      issues: "${states.identify_issues.output}"
    transitions:
      - when: "${output.quality === 'high'}"
        next: generate_report
      - when: "${output.quality === 'medium'}"
        next: improve_feedback
      - when: "${output.quality === 'low'}"
        next: request_human_review

  - id: improve_feedback
    type: llm
    model: improver
    prompt: |
      Improve this feedback:
      {feedback}

      Issues to address:
      {issues}
    input:
      feedback: "${states.generate_feedback.output}"
      issues: "${previous.output.issues}"
    next: evaluate_feedback

  - id: generate_report
    type: llm
    model: report_generator
    prompt: |
      Generate review report:
      Changes: {changes}
      Feedback: {feedback}
      Issues: {issues}

      Include:
      - Summary
      - Key findings
      - Recommendations
      - Action items
    input:
      changes: "${input.diff}"
      feedback: "${states.generate_feedback.output}"
      issues: "${states.identify_issues.output}"
    next: send_report

  - id: send_report
    type: tool
    tool: send_review
    parameters:
      report: "${previous.output}"
      channel: "${input.channel}"
      reviewers: "${input.reviewers}"
    next: check_followup

  - id: check_followup
    type: llm
    model: analyzer
    prompt: |
      Check if follow-up is needed:
      Report: {report}
      Changes: {changes}
      Review History: {history}
    input:
      report: "${states.generate_report.output}"
      changes: "${input.diff}"
      history: "${context.review_history}"
    transitions:
      - when: "${output.needs_followup}"
        next: plan_followup
      - next: end

  - id: plan_followup
    type: llm
    model: planner
    prompt: |
      Plan follow-up action:
      Report: {report}
      Changes: {changes}
      Follow-up Type: {type}
    input:
      report: "${states.generate_report.output}"
      changes: "${input.diff}"
      type: "${previous.output.followup_type}"
    next: execute_followup

  - id: execute_followup
    type: tool
    tool: "${previous.output.tool}"
    parameters: "${previous.output.parameters}"
    next: end

  - id: request_human_review
    type: tool
    tool: create_review_request
    parameters:
      changes: "${input.diff}"
      context: "${context.review_context}"
      priority: "${states.analyze_changes.output.priority}"
    next: notify_reviewer

  - id: notify_reviewer
    type: tool
    tool: send_notification
    parameters:
      message: "Human review requested for changes"
      reviewer: "${previous.output.assigned_reviewer}"
    next: end

  - id: end
    type: output
    value: "${context.final_report}"
```

## Key Features
### Change Analysis
- Identifies change types
- Assesses impact
- Evaluates complexity
- Determines risk level
### Quality Assessment
- Checks code style
- Verifies best practices
- Evaluates performance
- Assesses security
- Reviews maintainability
### Issue Detection
- Finds potential bugs
- Identifies vulnerabilities
- Spots performance issues
- Detects code smells
- Recognizes anti-patterns
### Feedback Generation
- Provides constructive feedback
- Suggests improvements
- References standards
- Includes examples
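The `evaluate_feedback` and `improve_feedback` states in the implementation form a self-correction loop: medium-quality feedback is refined and re-evaluated, while low-quality feedback escalates to a human. A minimal Python sketch of that routing logic, with stubbed model calls standing in for the LLM states, might look like this:

```python
def route_feedback(evaluate, improve, feedback, max_rounds=3):
    """Mirror the evaluate_feedback transitions: 'high' ships the feedback,
    'low' escalates to a human, 'medium' improves the draft and retries."""
    for _ in range(max_rounds):
        result = evaluate(feedback)              # stand-in for the evaluator state
        if result["quality"] == "high":
            return ("generate_report", feedback)
        if result["quality"] == "low":
            return ("request_human_review", feedback)
        feedback = improve(feedback, result["issues"])  # medium: refine, loop back
    return ("request_human_review", feedback)    # safety valve after max_rounds

# Stub evaluator: treat feedback as high quality once it cites a line number.
evaluate = lambda fb: {"quality": "high" if "line" in fb else "medium",
                       "issues": ["cite the affected line"]}
improve = lambda fb, issues: fb + " (see line 12)"

state, final = route_feedback(evaluate, improve, "Avoid bare open() calls.")
```

Note that the workflow as written puts no bound on the medium branch; the `max_rounds` guard here is an addition to keep the loop from running indefinitely.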
### Report Generation
- Creates comprehensive reports
- Summarizes findings
- Lists recommendations
- Specifies action items
## Best Practices
### Code Analysis
- Use static analysis tools
- Follow language standards
- Consider project context
- Review dependencies
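One way to scope static analysis to the change under review is to pull the touched file paths out of the unified diff before invoking any tools. A minimal sketch:

```python
import re

def changed_files(diff: str) -> list[str]:
    """Extract target file paths from a unified diff's '+++ b/...' headers,
    so linters and analyzers can run on just the files the change touches.
    Deleted files appear as '+++ /dev/null' and are skipped."""
    files = []
    for line in diff.splitlines():
        m = re.match(r"\+\+\+ b/(.+)", line)
        if m:
            files.append(m.group(1))
    return files

diff = "--- a/src/app.py\n+++ b/src/app.py\n@@ -1 +1 @@\n-x=1\n+x = 1\n"
```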
### Feedback Quality
- Be specific and clear
- Provide examples
- Explain reasoning
- Suggest alternatives
### Review Process
- Set clear criteria
- Maintain consistency
- Track review history
- Follow up on issues
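Tracking review history can be as simple as keying past findings by change identifier, so later rounds (like the `check_followup` state's `{history}` input) can see whether an issue has already been raised. The storage shape below is an illustrative assumption, not part of NudgeLang:

```python
from collections import defaultdict

class ReviewHistory:
    """In-memory record of past review rounds, keyed by pull request id."""
    def __init__(self):
        self._rounds = defaultdict(list)

    def record(self, pr_id: str, issues: list[str]) -> None:
        """Append one review round's issues for the given change."""
        self._rounds[pr_id].append(issues)

    def repeat_issues(self, pr_id: str, issues: list[str]) -> list[str]:
        """Return the subset of issues already raised in earlier rounds."""
        seen = {i for round_ in self._rounds[pr_id] for i in round_}
        return [i for i in issues if i in seen]

history = ReviewHistory()
history.record("PR-42", ["missing tests", "bare except"])
repeats = history.repeat_issues("PR-42", ["bare except", "long function"])
```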
### Performance Monitoring
- Track review times
- Measure issue detection
- Monitor feedback quality
- Analyze common issues
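A minimal sketch of collecting these metrics (the class and field names are illustrative) is an accumulator for per-review timings and issue tags, from which average review time and the most common issue categories fall out directly:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class ReviewMetrics:
    """Accumulates per-review durations and issue tags so review times
    and common issue categories can be reported over a period."""
    durations: list = field(default_factory=list)      # seconds per review
    issue_tags: Counter = field(default_factory=Counter)

    def record(self, duration_s: float, tags: list) -> None:
        self.durations.append(duration_s)
        self.issue_tags.update(tags)

    def mean_duration(self) -> float:
        return sum(self.durations) / len(self.durations)

metrics = ReviewMetrics()
metrics.record(40.0, ["style", "security"])
metrics.record(20.0, ["style"])
```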
## Common Use Cases
### Pull Request Review
- Change analysis
- Quality checks
- Issue detection
- Feedback generation
### Code Quality Audit
- Style checking
- Best practice verification
- Performance analysis
- Security assessment
### Refactoring Review
- Change impact analysis
- Quality improvement
- Issue prevention
- Best practice enforcement
### Security Review
- Vulnerability detection
- Security best practices
- Risk assessment
- Compliance checking
## Next Steps
- Learn about Content Moderation
- Explore Best Practices
- Read about Testing