
Documentation Index

Fetch the complete documentation index at: https://docs.agentastic.dev/llms.txt

Use this file to discover all available pages before exploring further.

With traditional AI coding, you wait for one agent to finish before starting the next — your time is blocked while the agent works. Multi-agent programming breaks that bottleneck. You launch several agents at once, each tackling a separate task in its own isolated environment, and you review the results when they are ready. Agentastic makes this possible through git worktrees and containers, giving every agent a clean, conflict-free workspace.

Why run multiple agents?

Traditional serial workflow:
You → Agent → Wait → Review → You → Agent → Wait → Review...
You are blocked while the agent works. One task at a time.

Multi-agent parallel workflow:
You → Agent 1 (auth feature)     ↘
You → Agent 2 (API endpoints)    → All working in parallel
You → Agent 3 (test coverage)    ↗
Multiple tasks progress simultaneously. Review each one when it is ready.
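The isolation behind this parallel workflow can be sketched with plain git. This is a hedged illustration of the mechanism, not Agentastic's actual internals; the branch and directory names are made up:

```shell
# Each agent gets its own worktree: a separate working directory
# that shares the same underlying git object store.
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per agent, each on its own branch:
git worktree add -q ../agent-auth  -b feature-auth
git worktree add -q ../agent-api   -b feature-api
git worktree add -q ../agent-tests -b feature-tests

# The main checkout plus three isolated agent workspaces:
git worktree list
```

Because each worktree is a separate directory on a separate branch, agents cannot trample each other's uncommitted changes.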

Setting up a multi-agent workflow

Step 1: Plan your tasks

Break your work into independent pieces. Each agent needs a clearly scoped task that does not depend on another agent finishing first.
Task                 Agent   Branch
User authentication  Claude  feature-auth
API endpoints        Codex   feature-api
Database schema      Claude  feature-db
Unit tests           Aider   feature-tests
Step 2: Launch agents from Agent Home

  1. Open Agent Home
  2. Write your first task prompt
  3. Select the agent and instance count
  4. Click Send
Repeat for each task, or use multi-instance launching to spin up several agents at once:
  • Select multiple agents in the picker
  • Set instance counts (for example, Claude ×2, Codex ×1)
  • Each instance gets its own worktree automatically
Step 3: Monitor progress

Track all running agents in the Agents navigator tab:
  • See all active worktrees and their status
  • Check terminal output for each agent
  • Switch between agents with Cmd+Option+Down / Cmd+Option+Up
Step 4: Review and merge

As each agent completes its task:
  1. Switch to the agent’s worktree
  2. Review changes in the Diff Viewer
  3. Run Code Review for AI feedback
  4. Create a PR or merge directly into your main branch
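In plain git terms, the merge-and-cleanup step looks roughly like this self-contained sketch. The branch and path names are examples; Agentastic's UI performs the equivalent for you:

```shell
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
G() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
G commit -q --allow-empty -m "initial commit"

# Simulate an agent committing in its own worktree:
git worktree add -q ../agent-auth -b feature-auth
G -C ../agent-auth commit -q --allow-empty -m "add auth feature"

# After reviewing the diff, merge the agent branch into main,
# then clean up the finished worktree and its branch:
G merge -q --no-ff -m "merge feature-auth" feature-auth
git worktree remove ../agent-auth
git branch -q -d feature-auth
```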

Parallel agent strategies

Task splitting

Split a large feature into independent parts and assign each to a different agent. This is the most common multi-agent pattern.
Feature: User Dashboard

Agent 1: Backend API
- Create dashboard endpoints
- Add data aggregation

Agent 2: Frontend components
- Build dashboard UI
- Add charts and widgets

Agent 3: Tests
- Write API tests
- Write component tests
Each agent works on a distinct layer of the stack, so there is minimal risk of overlapping file changes.

Competing implementations

Launch two agents with the same goal but different instructions to get competing implementations. Compare the results and pick the best one.
Task: Implement caching

Agent 1 (Claude): "Implement Redis-based caching"
Agent 2 (Codex):  "Implement in-memory caching"

→ Compare approaches, pick the best
This is useful when you are unsure which approach will suit your architecture.

Pipeline handoff

Chain agent outputs so that each agent builds on the previous one's work.
1. Agent 1: Generate the initial implementation
2. You review and provide feedback
3. Agent 2: Refactor based on your feedback
4. Agent 3: Add tests and documentation
Unlike a purely parallel workflow, this strategy introduces intentional sequencing at the review steps — you control when each stage begins.

Cross-review

Use agents to review each other's work before you do your final pass.
1. Agent 1: Implement the feature
2. Agent 2: Review Agent 1's code
3. Agent 3: Write tests for the feature
4. You: Final review and merge
This gives you a structured quality gate without having to review every line yourself first.

Best practices

Keep tasks independent

Good task splits avoid dependencies between agents:
  • Auth system (independent)
  • Payment processing (independent)
  • Email notifications (independent)
Avoid splits where one agent’s output is another’s input at the start:
  • Create user model (other tasks depend on this)
  • Add user validation (depends on the model)
  • Build user API (depends on both)
For dependent tasks, run them sequentially — launch the next agent only after merging the previous one’s output.
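As a sketch in plain git (hypothetical branch names), the sequential pattern looks like this:

```shell
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
G() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
G commit -q --allow-empty -m "initial commit"

# Task 1: the user model, which the later tasks depend on.
git worktree add -q ../agent-model -b user-model
G -C ../agent-model commit -q --allow-empty -m "create user model"

# Land task 1 before anything that depends on it starts:
G merge -q --no-ff -m "merge user-model" user-model
git worktree remove ../agent-model

# Only now launch the dependent task, branching from the merged main:
git worktree add -q ../agent-validation -b user-validation
```

The second agent starts from a main branch that already contains the user model, so its prompt can reference the merged code directly.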

Use clear branch names

Descriptive branch names make it easier to track what each agent is working on:
feature-auth-backend
feature-auth-frontend
feature-payments-api
bugfix-login-timeout

Give each agent proper context

  • Use @ mentions to reference relevant files in your prompt
  • Attach screenshots for UI-related tasks
  • Reference existing patterns you want the agent to follow

Start small

Begin with two or three agents:
  1. Learn the review overhead
  2. Get comfortable with the worktree workflow
  3. Scale up once the process feels natural

Monitor resource usage

Each agent runs as a separate process with its own resource footprint. Watch CPU and memory, close completed agents promptly, and use containers if you need hard resource limits.

Example workflow: Building a blog feature

Tasks identified:
  1. Database models for posts and comments
  2. REST API endpoints
  3. Admin UI for managing posts
  4. Public blog page
  5. Tests
Agent assignments:
Agent   Task                Branch
Claude  Database models     blog-db
Codex   REST API endpoints  blog-api
Claude  Admin UI            blog-admin-ui
Claude  Public blog page    blog-public
Review order:
  1. Review and merge blog-db first — it is the foundation
  2. Rebase blog-api onto main, then review and merge
  3. Review blog-admin-ui and blog-public in parallel
  4. Create a test agent after the features are stable
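The first two review steps can be sketched with plain git. The branch names match the example above; the file contents are stand-ins:

```shell
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
G() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
G commit -q --allow-empty -m "initial commit"

# Two agents working in parallel worktrees:
git worktree add -q ../blog-db  -b blog-db
git worktree add -q ../blog-api -b blog-api

echo "models"    > ../blog-db/models.py
git -C ../blog-db add models.py
G   -C ../blog-db commit -q -m "database models"

echo "endpoints" > ../blog-api/api.py
git -C ../blog-api add api.py
G   -C ../blog-api commit -q -m "rest api endpoints"

# Step 1: merge the foundation first.
G merge -q --no-ff -m "merge blog-db" blog-db

# Step 2: replay blog-api on top of the updated main before review.
G -C ../blog-api rebase -q main
```

After the rebase, the blog-api branch contains the database models, so its review reflects the code it will actually run against.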

Handling conflicts

Prevention

The best way to avoid conflicts is to assign non-overlapping files to each agent before you launch. Define clear boundaries — for example, backend vs. frontend — and communicate shared interface contracts upfront so each agent knows what to expect from the other.

Resolution

If two agents end up touching the same files:
  1. Merge the first agent’s work into main
  2. Rebase the second agent’s branch:
git checkout blog-api
git rebase main
# Resolve conflicts in the files git reports, then:
git add <resolved-files>
git rebase --continue
  3. Alternatively, use an interactive rebase to cherry-pick only the changes you want.

Using Diff Viewer

Before merging, compare agent branches directly to spot conflicts early:
  1. Open the Diff Viewer
  2. Compare agent-1-branch vs agent-2-branch
  3. Identify and resolve conflicts before either branch lands on main
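From the command line, the same early-conflict check can be approximated. This is a self-contained sketch; the branch names mirror the example above and the file contents are invented:

```shell
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
G() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
G commit -q --allow-empty -m "initial commit"

# Two agents that both edited the same file on separate branches:
git checkout -q -b agent-1-branch
echo "redis cache"     > cache.py
git add cache.py && G commit -q -m "agent 1 caching"

git checkout -q -b agent-2-branch main
echo "in-memory cache" > cache.py
git add cache.py && G commit -q -m "agent 2 caching"

# Files that differ between the two branches. Overlap here means a
# likely conflict when both land on main:
git diff --name-only agent-1-branch agent-2-branch
```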

Resource management

Understanding the resource cost of each agent helps you decide how many to run at once.
Resource                           Estimate
Claude Code RAM                    ~200–500 MB per agent
Node.js project (node_modules)     +500 MB per worktree
Container overhead                 Additional per container
Disk (1 GB project × 5 worktrees)  ~5 GB working files
Worktrees share git objects but duplicate working files. Clean up completed worktrees promptly to reclaim disk space.
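A quick way to see and reclaim this disk usage from the shell (a sketch; the worktree paths are examples):

```shell
cd "$(mktemp -d)"
git init -q -b main repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git worktree add -q ../agent-1 -b feature-1
git worktree add -q ../agent-2 -b feature-2

git worktree list              # all worktrees share one object store
du -sh ../agent-1 ../agent-2   # each duplicates the working files

# Reclaim disk as soon as an agent is done:
git worktree remove ../agent-2
git worktree prune
```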
Running multiple agents also multiplies API calls. Rate limits may apply, and costs accumulate faster than with a single agent — keep an eye on your API usage.

Troubleshooting

Slow performance

Too many agents competing for CPU or memory can slow each one down. Try:
  • Closing completed or idle agents
  • Reducing the number of agents running concurrently
  • Using containers with CPU limits to prevent any single agent from monopolizing resources

Merge conflicts between agents

Agents worked on overlapping files. To recover:
  • Review both sets of changes carefully before resolving
  • For tightly coupled tasks, consider a sequential approach next time rather than running them in parallel

Agents lack shared context

Each agent has isolated context; it does not know what other agents are working on. To keep things coherent:
  • Re-share relevant context when starting a downstream agent
  • Use consistent naming conventions and patterns across all agent prompts
  • Document shared interfaces explicitly in your prompts