With traditional AI coding, you wait for one agent to finish before starting the next — your time is blocked while the agent works. Multi-agent programming breaks that bottleneck. You launch several agents at once, each tackling a separate task in its own isolated environment, and you review the results when they are ready. Agentastic makes this possible through git worktrees and containers, giving every agent a clean, conflict-free workspace.
Why run multiple agents?
In a traditional serial workflow, you prompt one agent, wait for it to finish, review its output, and only then start the next task, so total time is the sum of every task. Running agents in parallel overlaps that work, and your wall-clock time approaches the longest single task plus your review passes.
Setting up a multi-agent workflow
Plan your tasks
Break your work into independent pieces. Each agent needs a clearly scoped task that does not depend on another agent finishing first.
| Task | Agent | Branch |
|---|---|---|
| User authentication | Claude | feature-auth |
| API endpoints | Codex | feature-api |
| Database schema | Claude | feature-db |
| Unit tests | Aider | feature-tests |
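Behind the scenes, each row of the table maps to its own branch checked out in its own worktree. A minimal runnable sketch of the equivalent plain-git setup, using a throwaway repo (Agentastic automates all of this; branch names come from the table above):

```shell
set -e
# Throwaway repo standing in for your project
dir=$(mktemp -d)
cd "$dir"
git init -qb main
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "init"

# One isolated worktree per task, each on its own new branch
git worktree add "$dir-auth" -b feature-auth
git worktree add "$dir-api" -b feature-api

# The main checkout plus one line per worktree
git worktree list
```

Because every worktree is a separate directory on the same repository, agents never fight over a shared checkout.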
Launch agents from Agent Home
To launch a single agent:
- Open Agent Home
- Write your task prompt
- Select the agent and instance count
- Click Send

To launch multiple agents at once:
- Select multiple agents in the picker
- Set instance counts (for example, Claude ×2, Codex ×1)
- Each instance gets its own worktree automatically
Monitor progress
Track all running agents in the Agents navigator tab:
- See all active worktrees and their status
- Check terminal output for each agent
- Switch between agents with Cmd+Option+Down / Cmd+Option+Up
Parallel agent strategies
Strategy 1: Feature decomposition
Split a large feature into independent parts and assign each to a different agent. This is the most common multi-agent pattern. Each agent works on a distinct layer of the stack, so there is minimal risk of overlapping file changes.
Strategy 2: Same task, different approaches
Launch two agents with the same goal but different instructions to get competing implementations. Compare the results and pick the best one. This is useful when you are unsure which approach will suit your architecture.
Strategy 3: Iterative refinement
Chain agent outputs so that each agent builds on the previous one’s work. Unlike a purely parallel workflow, this strategy introduces intentional sequencing at the review steps — you control when each stage begins.
Strategy 4: Code review pipeline
Use agents to review each other’s work before you do your final pass. This gives you a structured quality gate without having to review every line yourself first.
Best practices
Keep tasks independent
Good task splits avoid dependencies between agents:
- Auth system (independent)
- Payment processing (independent)
- Email notifications (independent)

Poor splits chain tasks together, forcing agents to wait on each other:
- Create user model (other tasks depend on this)
- Add user validation (depends on the model)
- Build user API (depends on both)
Use clear branch names
Descriptive branch names, such as feature-auth or blog-admin-ui rather than agent-1, make it easier to track what each agent is working on.
Give each agent proper context
- Use @mentions to reference relevant files in your prompt
- Attach screenshots for UI-related tasks
- Reference existing patterns you want the agent to follow
Start small
Begin with two or three agents:
- Learn the review overhead
- Get comfortable with the worktree workflow
- Scale up once the process feels natural
Monitor resource usage
Each agent runs as a separate process with its own resource footprint. Watch CPU and memory, close completed agents promptly, and use containers if you need hard resource limits.
Example workflow: Building a blog feature
Tasks identified:
- Database models for posts and comments
- REST API endpoints
- Admin UI for managing posts
- Public blog page
- Tests
| Agent | Task | Branch |
|---|---|---|
| Claude | Database models | blog-db |
| Codex | REST API endpoints | blog-api |
| Claude | Admin UI | blog-admin-ui |
| Claude | Public blog page | blog-public |
- Review and merge blog-db first — it is the foundation
- Rebase blog-api onto main, then review and merge
- Review blog-admin-ui and blog-public in parallel
- Create a test agent after the features are stable
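The merge order above can be sketched with plain git. A runnable toy version covering the first two branches, with stand-in commits in a throwaway repo (your real branches would carry the agents' actual changes):

```shell
set -e
# Throwaway repo standing in for your project
dir=$(mktemp -d)
cd "$dir"
git init -qb main
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "init"

# Stand-in agent branches, each adding its own file
for b in blog-db blog-api; do
  git checkout -qb "$b" main
  echo "$b" > "$b.txt"
  git add "$b.txt"
  git commit -qm "add $b"
done

# 1. Merge the foundation branch first
git checkout -q main
git merge -q --no-edit blog-db

# 2. Rebase the dependent branch onto the updated main, then merge it
git checkout -q blog-api
git rebase -q main
git checkout -q main
git merge -q --no-edit blog-api
```

Rebasing before each merge keeps history linear, so every later review shows only that one agent's changes.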
Handling conflicts
Prevention
The best way to avoid conflicts is to assign non-overlapping files to each agent before you launch. Define clear boundaries — for example, backend vs. frontend — and communicate shared interface contracts upfront so each agent knows what to expect from the other.
Resolution
If two agents end up touching the same files:
- Merge the first agent’s work into main
- Rebase the second agent’s branch onto the updated main
- Alternatively, use an interactive rebase to cherry-pick only the changes you want
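Those steps can be sketched end to end. In this runnable toy, two stand-in agents edit the same file in a throwaway repo; the first agent's branch is merged, and the second is rebased through the resulting conflict (here resolved by keeping the second agent's version):

```shell
set -e
# Throwaway repo with one shared file
dir=$(mktemp -d)
cd "$dir"
git init -qb main
git config user.name demo
git config user.email demo@example.com
echo "original" > app.txt
git add app.txt
git commit -qm "init"

# Two agents edit the same file from the same starting point
for b in agent-1 agent-2; do
  git checkout -qb "$b" main
  echo "$b version" > app.txt
  git commit -qam "$b"
done

# Merge the first agent's work into main (fast-forward)
git checkout -q main
git merge -q --no-edit agent-1

# Rebase the second agent's branch; this conflicts on app.txt
git checkout -q agent-2
if ! git rebase -q main; then
  echo "agent-2 version" > app.txt   # resolve: keep agent-2's change
  git add app.txt
  GIT_EDITOR=true git rebase --continue
fi
```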
Using Diff Viewer
Before merging, compare agent branches directly to spot conflicts early:
- Open the Diff Viewer
- Compare agent-1-branch vs agent-2-branch
- Identify and resolve conflicts before either branch lands on main
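The same comparison is available from plain git. A runnable toy where two stand-in branches both touch one file (the branch names agent-1-branch and agent-2-branch match the steps above):

```shell
set -e
# Throwaway repo standing in for your project
dir=$(mktemp -d)
cd "$dir"
git init -qb main
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "init"

# Two stand-in agent branches that both edit the same file
for b in agent-1-branch agent-2-branch; do
  git checkout -qb "$b" main
  echo "$b" > shared.txt
  git add shared.txt
  git commit -qm "$b edits shared.txt"
done

# Files that differ between the two branch tips; overlap here is an
# early warning that merging both branches will conflict
git diff --name-only agent-1-branch agent-2-branch
```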
Resource management
Understanding the resource cost of each agent helps you decide how many to run at once.

| Resource | Estimate |
|---|---|
| Claude Code RAM | ~200–500 MB per agent |
| Node.js project (node_modules) | +500 MB per worktree |
| Container overhead | Additional per container |
| Disk (1 GB project × 5 worktrees) | ~5 GB working files |
Worktrees share git objects but duplicate working files. Clean up completed worktrees promptly to reclaim disk space.
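That cleanup corresponds to a couple of git commands. A runnable sketch against a throwaway repo with one finished worktree (Agentastic normally manages its worktrees for you):

```shell
set -e
# Throwaway repo standing in for your project
dir=$(mktemp -d)
cd "$dir"
git init -qb main
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "init"

# A finished worktree whose working files we want to reclaim
git worktree add "$dir-done" -b feature-done

# Remove the worktree directory, then prune stale bookkeeping entries
git worktree remove "$dir-done"
git worktree prune

git worktree list   # back to just the main checkout
```

Note that git worktree remove only deletes the duplicated working files; the branch and its commits stay in the repository.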
Troubleshooting
Agent is running slowly
Too many agents competing for CPU or memory can slow each one down. Try:
- Closing completed or idle agents
- Reducing the number of agents running concurrently
- Using containers with CPU limits to prevent any single agent from monopolizing resources
Merge conflicts
Agents worked on overlapping files. To recover:
- Review both sets of changes carefully before resolving
- For tightly coupled tasks, consider a sequential approach next time rather than running them in parallel
Context lost between agents
Each agent has isolated context — it does not know what other agents are working on. To keep things coherent:
- Re-share relevant context when starting a downstream agent
- Use consistent naming conventions and patterns across all agent prompts
- Document shared interfaces explicitly in your prompts