Agentastic includes integrated AI code review so you can get feedback on your changes without leaving the editor. One click sends your diff to one or more AI agents, each of which analyzes your code and reports findings in a terminal tab. You can run multiple agents in parallel to get diverse perspectives on the same change.

Supported agents

Claude Code

Anthropic’s Claude reviews code with strong understanding of intent, architecture, and best practices.

Command: claude "$(cat 'prompt_file')"

Requirements:
  • Claude Code CLI installed: npm install -g @anthropic-ai/claude-code
  • Anthropic API key configured
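
A minimal setup sketch for these requirements, assuming you authenticate with an API key in the ANTHROPIC_API_KEY environment variable (Claude Code also supports other sign-in methods):

npm install -g @anthropic-ai/claude-code    # install the Claude Code CLI
export ANTHROPIC_API_KEY="sk-ant-..."       # assumption: env-var auth; substitute your own key
claude --version                            # quick check that the CLI is on your PATH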

Codex

OpenAI’s Codex agent analyzes code patterns and makes targeted improvement suggestions.

Command: codex review "$(cat 'prompt_file')"

Requirements:
  • Codex CLI installed: npm install -g @openai/codex
  • OpenAI API key configured
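
A similar sketch for these requirements, assuming API-key authentication through the OPENAI_API_KEY environment variable (the Codex CLI can also sign in with a ChatGPT account):

npm install -g @openai/codex    # install the Codex CLI
export OPENAI_API_KEY="sk-..."  # assumption: env-var auth; substitute your own key
codex --version                 # quick check that the CLI is on your PATH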

CodeRabbit

CodeRabbit provides automated review focused on PR-level feedback and code quality checks.

Command: coderabbit review --plain

Requirements:
  • CodeRabbit CLI installed
  • CodeRabbit account connected
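
Install the CodeRabbit CLI following its official documentation; the account-connection command below is an assumption, so confirm it against CodeRabbit's docs:

# install the CodeRabbit CLI per its official instructions, then:
coderabbit auth login      # assumption: connects the CLI to your CodeRabbit account
coderabbit review --plain  # the same command Agentastic runs, handy as a manual smoke test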

Cursor Agent

Cursor’s AI agent for code review. Supports sandbox control and cloud offloading.

Command: agent --model auto --print "$(cat 'prompt_file')"

Running a code review

From the toolbar

  • Click the Code Review button to run all agents enabled in Settings > Code Review.
  • Hold or right-click the button to select specific agents for this review only.

Step by step

  1. Make your code changes. Edit files as usual. The review analyzes whatever is in your current diff.
  2. Click the Code Review button in the toolbar.
  3. Select agents (optional). Hold the button or use the menu to pick specific agents, or use your defaults from Settings.
  4. Read the feedback. Each agent opens in its own terminal tab and prints its findings. Review the output and address suggestions before committing.

What gets reviewed

The review prompt sent to each agent includes the following (a rough git equivalent is sketched after this list):
  • A comparison of your current branch against the target branch
  • The complete unified diff of all changes
  • The commit history between the two branches
  • Prioritized review criteria
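
These inputs can also be inspected locally with ordinary git commands. A rough sketch, assuming the target branch is main (Agentastic assembles the actual prompt for you):

git diff main...HEAD          # unified diff of your branch against its merge base with main
git log --oneline main..HEAD  # commits on your branch that are not on the target branch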

Review criteria

Agents are instructed to prioritize findings in this order:
  1. Bugs — logic errors, edge cases, missing null checks
  2. Security — vulnerabilities, missing input validation, accidental secret exposure
  3. Performance — inefficiencies, memory leaks, N+1 queries
  4. Maintainability — code clarity, documentation gaps, inconsistent patterns
  5. Testing — missing test coverage, low-quality tests

Multi-agent reviews

Running multiple agents in parallel is the best way to catch a wider range of issues. Different agents have different strengths:
  • Claude excels at understanding intent and identifying architectural concerns.
  • Codex focuses on code patterns and adherence to best practices.
  • CodeRabbit specializes in PR-level feedback and incremental review.
To enable multiple agents:
  1. Open Settings > Code Review. Use Cmd+, to open Settings, then navigate to the Code Review section.
  2. Enable agents. Toggle on each agent you want to include in reviews.
  3. Click Review. All enabled agents run in parallel, and each opens its own terminal tab with its output.

Adding custom agents

You can integrate any terminal-based AI tool or internal review script as a custom code review agent.

Configuration

  1. Open Settings > Code Review.
  2. Scroll to Custom Agents and click Add Agent. A form appears for the agent’s name and command.
  3. Enter the agent name and command:
       • Name — the display name shown in the selector.
       • Command — the shell command to run.

Command templates

Use {prompt} as a placeholder for the review prompt in your command:
my-agent review --prompt "{prompt}"
If your command does not include {prompt}, Agentastic passes the prompt via a temporary file instead:
my-agent "$(cat 'prompt_file')"

Examples

ollama run codellama "$(cat 'prompt_file')"
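
A custom agent can also be a small wrapper script, which is handy for internal review tools. A hypothetical sketch (the script name and model are placeholders, not Agentastic conventions); register it with the command /path/to/review-agent.sh "{prompt}":

#!/usr/bin/env bash
# review-agent.sh: hypothetical wrapper that forwards the review prompt to a local model
set -euo pipefail
prompt="$1"                       # Agentastic substitutes the full review prompt for {prompt}
ollama run codellama "$prompt"    # send the prompt to a locally running Ollama model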

Settings reference

Configure code review in Settings > Code Review:
Setting          Description
Enabled Agents   Toggle which built-in agents run when you click Review
Custom Agents    Add and manage your own review commands

Typical workflow

A standard code review session looks like this:
  1. Make your changes. Write or edit code as part of a feature or bug fix.
  2. Inspect your diff. Open the Diff Viewer to confirm the changes look correct before asking for a review.
  3. Run AI review. Click Review to send the diff to your enabled agents.
  4. Address feedback. Work through the suggestions in each terminal tab.
  5. Commit once you’re satisfied with the result.

Review smaller, focused changes to get more precise feedback. Large, multi-purpose diffs can produce less actionable output.

AI review complements, but does not replace, human code review. Use both for the best results.