cagent
cagent is an open source tool for building teams of specialized AI agents. Instead of prompting one generalist model, you define agents with specific roles and instructions that collaborate to solve problems. Run these agent teams from your terminal using any LLM provider.
Why agent teams
One agent handling complex work means constant context-switching. Split the work across focused agents instead - each handles what it's best at. cagent manages the coordination.
Here's a two-agent team that debugs problems:
```yaml
agents:
  root:
    model: openai/gpt-5-mini # Change to the model that you want to use
    description: Bug investigator
    instruction: |
      Analyze error messages, stack traces, and code to find bug root causes.
      Explain what's wrong and why it's happening.
      Delegate fix implementation to the fixer agent.
    sub_agents: [fixer]
    toolsets:
      - type: filesystem
      - type: mcp
        ref: docker:duckduckgo
  fixer:
    model: anthropic/claude-sonnet-4-5 # Change to the model that you want to use
    description: Fix implementer
    instruction: |
      Write fixes for bugs diagnosed by the investigator.
      Make minimal, targeted changes and add tests to prevent regression.
    toolsets:
      - type: filesystem
      - type: shell
```
The root agent investigates and explains the problem. When it understands the issue, it hands off to `fixer` for implementation. Each agent stays focused on its specialty.
Installation
cagent is included in Docker Desktop 4.49 and later.
For Docker Engine users or custom installations:
- Homebrew: `brew install cagent`
- Winget: `winget install Docker.Cagent`
- Pre-built binaries: GitHub releases
- From source: See the cagent repository
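To confirm the binary is on your PATH, you can ask it for its version. The exact version flag below is an assumption and may differ between releases; `cagent --help` lists the available commands either way:

```console
$ cagent --version
$ cagent --help
```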
Get started
Try the bug analyzer team:
1. Set your API key for the model provider you want to use:

   ```console
   $ export ANTHROPIC_API_KEY=<your_key>  # For Claude models
   $ export OPENAI_API_KEY=<your_key>     # For OpenAI models
   $ export GOOGLE_API_KEY=<your_key>     # For Gemini models
   ```

2. Save the example configuration as `debugger.yaml`.

3. Run your agent team:

   ```console
   $ cagent run debugger.yaml
   ```
You'll see a prompt where you can describe bugs or paste error messages. The investigator analyzes the problem, then hands off to the fixer for implementation.
How it works
You interact with the root agent, which can delegate work to sub-agents you define. Each agent:
- Uses its own model and parameters
- Has its own context (agents don't share knowledge)
- Can access built-in tools like todo lists, memory, and task delegation
- Can use external tools via MCP servers
The root agent delegates tasks to agents listed under `sub_agents`. Sub-agents can have their own sub-agents for deeper hierarchies.
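As a rough sketch of such a hierarchy (the agent names, models, and the `todo` toolset type below are illustrative assumptions, not taken from the example above), a root coordinator could delegate to a researcher that in turn delegates to a writer:

```yaml
agents:
  root:
    model: openai/gpt-5-mini
    description: Coordinator
    instruction: |
      Break the user's request into tasks and delegate them.
    sub_agents: [researcher]
    toolsets:
      - type: todo # built-in todo list; the exact type name may differ
  researcher:
    model: anthropic/claude-sonnet-4-5
    description: Research specialist
    instruction: |
      Gather the information each task needs, then hand writing work on.
    sub_agents: [writer] # a sub-agent with its own sub-agent
    toolsets:
      - type: mcp
        ref: docker:duckduckgo
  writer:
    model: openai/gpt-5-mini
    description: Writing specialist
    instruction: |
      Draft the final answer from the researcher's findings.
```

You still interact only with `root`; it decides when to pull in `researcher`, which in turn decides when to pull in `writer`.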
Configuration options
Agent configurations are YAML files. A basic structure looks like this:
```yaml
agents:
  root:
    model: claude-sonnet-4-0
    description: Brief role summary
    instruction: |
      Detailed instructions for this agent...
    sub_agents: [helper]
  helper:
    model: gpt-5-mini
    description: Specialist agent role
    instruction: |
      Instructions for the helper agent...
```
You can also configure model settings (like context limits), tools (including MCP servers), and more. See the configuration reference for complete details.
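For example, model settings can be declared once and referenced by name from agents. The sketch below assumes a top-level `models` block with `provider`, `max_tokens`, and `temperature` fields; confirm the exact field names against the configuration reference:

```yaml
models:
  focused-claude:
    provider: anthropic
    model: claude-sonnet-4-0
    max_tokens: 64000   # assumed field for capping response length
    temperature: 0.2    # assumed field for lower-variance answers

agents:
  root:
    model: focused-claude # references the named model defined above
    description: Brief role summary
    instruction: |
      Detailed instructions for this agent...
```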
Share agent teams
Agent configurations are packaged as OCI artifacts. Push and pull them like container images:
```console
$ cagent push ./debugger.yaml myusername/debugger
$ cagent pull myusername/debugger
```
Use Docker Hub or any OCI-compatible registry. Pushing creates the repository if it doesn't exist yet.
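Anyone with access to the repository can then run the shared team. As a hedged example, assuming your cagent version accepts a registry reference directly in `run` (otherwise, pull first and run the downloaded YAML):

```console
$ cagent run myusername/debugger
```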
What's next
- Follow the tutorial to build your first coding agent
- Learn best practices for building effective agents
- Integrate cagent with your editor or use agents as tools in MCP clients
- Browse example agent configurations in the cagent repository
- Use `cagent new` to generate agent teams with AI (see the sketch after this list)
- Connect agents to external tools via the Docker MCP Gateway
- Read the full configuration reference
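For the `cagent new` option above, the command generates a configuration for you interactively; the exact prompts depend on your version:

```console
$ cagent new   # describe what the team should do; cagent drafts the YAML
```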