Agentastic includes integrated AI code review, so you can get feedback on your changes without leaving the editor. One click sends your diff to one or more AI agents, each of which analyzes your code and reports its findings in a terminal tab. You can run multiple agents in parallel to get diverse perspectives on the same change.
## Supported agents
### Claude Code

Anthropic's Claude reviews code with a strong understanding of intent, architecture, and best practices.

Command:

```bash
claude "$(cat 'prompt_file')"
```

Requirements:

- Claude Code CLI installed: `npm install -g @anthropic-ai/claude-code`
- Anthropic API key configured
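To see what Agentastic runs under the hood, you can reproduce the command by hand. A minimal sketch, assuming `main` is your target branch; the `/tmp/review-prompt.txt` path is illustrative, since Agentastic creates and manages the temporary prompt file for you:

```bash
# Build a review prompt from the current branch's diff against main,
# then pass its contents to Claude Code as a single argument
git diff main...HEAD > /tmp/review-prompt.txt
claude "$(cat /tmp/review-prompt.txt)"
```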
### Codex

OpenAI's Codex agent analyzes code patterns and makes targeted improvement suggestions.

Command:

```bash
codex review "$(cat 'prompt_file')"
```

Requirements:

- Codex CLI installed: `npm install -g @openai/codex`
- OpenAI API key configured
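Before your first review, it is worth confirming the CLI can be found and authenticated. A quick sanity check, assuming you authenticate with an environment variable (other methods exist; see OpenAI's CLI documentation):

```bash
# Install the Codex CLI and expose your OpenAI API key to it
npm install -g @openai/codex
export OPENAI_API_KEY="sk-..."

# Confirm the CLI is on your PATH
codex --version
```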
### CodeRabbit

CodeRabbit provides automated review focused on PR-level feedback and code quality checks.

Command:

```bash
coderabbit review --plain
```

Requirements:

- CodeRabbit CLI installed
- CodeRabbit account connected
### Cursor Agent

Cursor's AI agent for code review. Supports sandbox control and cloud offloading.

Command:

```bash
agent --model auto --print "$(cat 'prompt_file')"
```
## Running a code review
### From the toolbar
- Click the Code Review button to run all agents enabled in Settings > Code Review.
- Hold or right-click the button to select specific agents for this review only.
### Step by step
1. **Select agents (optional)**
   Hold the button or use the menu to pick specific agents, or use your defaults from Settings.
## What gets reviewed
The review prompt sent to each agent includes (see the git sketch after this list for how to gather the same inputs by hand):

- A comparison of your current branch against the target branch
- The complete unified diff of all changes
- The commit history between the two branches
- Prioritized review criteria
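Assuming `main` is the target branch, the equivalent inputs look like this in plain git:

```bash
# The diff and commit history Agentastic includes in the review prompt
git diff main...HEAD           # complete unified diff against the target branch
git log --oneline main..HEAD   # commit history between the two branches
```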
## Review criteria
Agents are instructed to prioritize findings in this order:

1. Bugs — logic errors, edge cases, missing null checks
2. Security — vulnerabilities, missing input validation, accidental secret exposure
3. Performance — inefficiencies, memory leaks, N+1 queries
4. Maintainability — code clarity, documentation gaps, inconsistent patterns
5. Testing — missing test coverage, low-quality tests
## Multi-agent reviews
Running multiple agents in parallel is the best way to catch a wider range of issues. Different agents have different strengths:

- Claude excels at understanding intent and identifying architectural concerns.
- Codex focuses on code patterns and adherence to best practices.
- CodeRabbit specializes in PR-level feedback and incremental review.
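Agentastic handles the parallelism for you, but the idea is easy to approximate in a shell. A rough sketch reusing the agent commands documented above; the prompt path and output file names are illustrative:

```bash
# Run two agents on the same prompt concurrently and wait for both
git diff main...HEAD > /tmp/review-prompt.txt
claude -p "$(cat /tmp/review-prompt.txt)" > claude-findings.txt &   # -p prints the response non-interactively
codex review "$(cat /tmp/review-prompt.txt)" > codex-findings.txt &
wait
```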
## Adding custom agents
You can integrate any terminal-based AI tool or internal review script as a custom code review agent.

### Configuration
#### Command templates
Use `{prompt}` as a placeholder for the review prompt in your command. If your command does not include `{prompt}`, Agentastic passes the prompt via a temporary file instead.
#### Examples
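For instance, a custom entry might take either form below; `my-review-tool` is a hypothetical command standing in for your own tool or script:

```bash
# Prompt substituted inline for the {prompt} placeholder
my-review-tool --prompt "{prompt}"

# No placeholder: read the prompt from the temporary file instead,
# as the built-in agent commands above do
my-review-tool "$(cat 'prompt_file')"
```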
## Settings reference
Configure code review in Settings > Code Review:

| Setting | Description |
|---|---|
| Enabled Agents | Toggle which built-in agents run when you click Review |
| Custom Agents | Add and manage your own review commands |
## Typical workflow
A standard code review session looks like this:

1. **Inspect your diff**
   Open the Diff Viewer to confirm the changes look correct before asking for a review.
AI review complements, but does not replace, human code review. Use both for the best results.