What if your AI agents remembered everything, learned from every mistake, and got better with every task?
Agent Swarm lets you run a team of AI coding agents that coordinate autonomously. A lead agent receives tasks (from you, Slack, or GitHub), breaks them down, and delegates to worker agents running in Docker containers. Workers execute tasks, report progress, and ship code — all without manual intervention.
Key Features
Lead/Worker coordination — A lead agent delegates and tracks work across multiple workers
Docker isolation — Each worker runs in its own container with a full dev environment
Slack, GitHub, GitLab & Email integration — Create tasks by messaging the bot, @mentioning it in issues/PRs/MRs, or sending an email
Task lifecycle — Priority queues, dependencies, pause/resume across deployments
Compounding memory — Agents learn from every session and get smarter over time
Persistent identity — Each agent has its own personality, expertise, and working style that evolves
Dashboard UI — Real-time monitoring of agents, tasks, and inter-agent chat
Service discovery — Workers can expose HTTP services and discover each other
GitLab integration — Full GitLab webhook support alongside GitHub via provider adapter pattern
Working directory support — Tasks can specify a custom starting directory for agents via the dir parameter
Multi-provider — Run agents with Claude Code, pi-mono, or OpenAI Codex (HARNESS_PROVIDER=claude|pi|codex)
Agent-fs integration — Persistent, searchable filesystem shared across the swarm with auto-registration on first boot
Debug dashboard — SQL query interface with Monaco editor and AG Grid results for database inspection
Workflow engine — DAG-based workflow automation with executor registry, checkpoint durability, webhook/schedule/manual triggers, per-step retry, structured I/O schemas, fan-out/convergence, configurable failure handling, and version history
Linear integration — Bidirectional ticket tracker sync via OAuth + webhooks with AgentSession lifecycle and generic tracker abstraction
Portless local dev — Friendly URLs for local development (api.swarm.localhost:1355) via portless proxy
Onboarding wizard — Interactive CLI wizard (agent-swarm onboard) to set up a new swarm from scratch with presets, credential collection, and docker-compose generation
Skill system — Reusable procedural knowledge: create, install, publish, and sync skills from GitHub with scope resolution (agent → swarm → global)
Human-in-the-Loop — Workflow nodes that pause for human approval or input, with a dashboard UI for reviewing and responding to requests
MCP server management — Register, install, and manage MCP servers for agents with scope cascade (agent → swarm → global) and auto-injection into worker containers
Context usage tracking — Monitor context window utilization and compaction events per task with visual indicators in the dashboard
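Both the skill system and MCP server management resolve entries through the same agent → swarm → global scope cascade. A minimal sketch of how such a lookup could work — the `Scope` type and `resolve` helper here are illustrative, not the actual implementation:

```typescript
type Scope = "agent" | "swarm" | "global";

// Ordered from most to least specific; the first scope that defines
// the entry wins, so agent-level settings shadow swarm and global ones.
const CASCADE: Scope[] = ["agent", "swarm", "global"];

type Registry<T> = Partial<Record<Scope, Record<string, T>>>;

// Resolve a named entry (a skill, an MCP server config, ...) by
// walking the cascade until one scope defines it.
function resolve<T>(registry: Registry<T>, name: string): T | undefined {
  for (const scope of CASCADE) {
    const entry = registry[scope]?.[name];
    if (entry !== undefined) return entry;
  }
  return undefined;
}

// Example: a swarm-level skill shadows the global default.
const skills: Registry<string> = {
  global: { "review-pr": "v1" },
  swarm: { "review-pr": "v2" },
  agent: {},
};
```

The same shape works for any cascaded setting: publish once at global scope, override per swarm or per agent as needed.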
Option A: Full Docker Compose
The fastest way to get a full swarm running — API server, lead agent, and 2 workers.
git clone https://github.com/desplega-ai/agent-swarm.git
cd agent-swarm
# Configure environment
cp .env.docker.example .env
# Edit .env — set API_KEY and CLAUDE_CODE_OAUTH_TOKEN at minimum
# Start everything
docker compose -f docker-compose.example.yml --env-file .env up -d
The API runs on port 3013. The dashboard is available separately (see Dashboard).
The API includes interactive documentation at http://localhost:3013/docs (Scalar UI) and a machine-readable OpenAPI 3.1 spec at http://localhost:3013/openapi.json.
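For a quick smoke test once the stack is up, you can probe the spec endpoint. A hedged sketch that only builds the request — the `x-api-key` header name is an assumption for illustration; check the Scalar docs at `/docs` for the actual auth scheme:

```typescript
const base = "http://localhost:3013";

// NOTE: the auth header name is a guess; consult /docs for the real scheme.
const req = new Request(`${base}/openapi.json`, {
  headers: { "x-api-key": process.env.API_KEY ?? "" },
});

// With the stack running, uncomment to fetch the OpenAPI 3.1 spec:
// const spec = await (await fetch(req)).json();
```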
Option B: Local API + Docker Workers
Run the API locally and connect Docker workers to it.
git clone https://github.com/desplega-ai/agent-swarm.git
cd agent-swarm
bun install
# 1. Configure and start the API server
cp .env.example .env
# Edit .env — set API_KEY
bun run start:http
In a new terminal, start a worker:
# 2. Configure and run a Docker worker
cp .env.docker.example .env.docker
# Edit .env.docker — set API_KEY (same as above) and CLAUDE_CODE_OAUTH_TOKEN
bun run docker:build:worker
mkdir -p ./logs ./work/shared ./work/worker-1
bun run docker:run:worker
Option C: Claude Code as Lead Agent
Use Claude Code directly as the lead agent — no Docker required for the lead.
# After starting the API server (Option B, step 1):
bunx @desplega.ai/agent-swarm connect
This configures Claude Code to connect to the swarm. Start Claude Code and tell it:
Register yourself as the lead agent in the agent-swarm.
How It Works
```
You (Slack / GitHub / Email / CLI)
              |
       Lead Agent ←→ MCP API Server ←→ SQLite DB
              |
        ┌─────┼─────┐
     Worker Worker Worker
(Docker containers with full dev environments)
```
You send a task — via Slack DM, GitHub @mention, email, or directly through the API
Lead agent plans — breaks the task down and assigns subtasks to workers
Workers execute — each in an isolated Docker container with git, Node.js, Python, etc.
Progress is tracked — real-time updates in the dashboard, Slack threads, or API
Results are delivered — PRs created, issues closed, Slack replies sent
Agents learn — every session's learnings are extracted and recalled in future tasks
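The delegation loop above can be pictured as a priority queue with dependency gating: a task becomes eligible only once everything it depends on has completed. A sketch with illustrative types and field names (not the actual schema):

```typescript
type TaskStatus = "pending" | "done" | "failed";

interface Task {
  id: string;
  priority: number;    // higher runs first
  dependsOn: string[]; // task ids that must be done before this one
  status: TaskStatus;
}

// Pick the highest-priority pending task whose dependencies are all done.
function nextTask(tasks: Task[]): Task | undefined {
  const done = new Set(tasks.filter(t => t.status === "done").map(t => t.id));
  return tasks
    .filter(t => t.status === "pending" && t.dependsOn.every(d => done.has(d)))
    .sort((a, b) => b.priority - a.priority)[0];
}

const queue: Task[] = [
  { id: "plan", priority: 1, dependsOn: [], status: "done" },
  { id: "impl", priority: 2, dependsOn: ["plan"], status: "pending" },
  { id: "docs", priority: 3, dependsOn: ["impl"], status: "pending" },
];
```

Note that `docs` has the highest priority but stays blocked until `impl` completes — priority only orders tasks that are already unblocked.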
Agents Get Smarter Over Time
Agent Swarm agents aren't stateless. They build compounding knowledge through multiple automatic mechanisms:
Memory System
Every agent has a searchable memory backed by OpenAI embeddings (text-embedding-3-small). Memories are automatically created from:
Session summaries — At the end of each session, a lightweight model extracts key learnings: mistakes made, patterns discovered, failed approaches, and codebase knowledge. These summaries become searchable memories.
Task completions — Every completed (or failed) task's output is indexed. Failed tasks include notes about what went wrong, so the agent avoids repeating the same mistake.
File-based notes — Agents write to /workspace/personal/memory/ in their per-agent directory. Files are automatically indexed and can be promoted to swarm scope.
Lead-to-worker injection — The lead agent can push specific learnings into any worker's memory using the inject-learning tool, closing the feedback loop.
Before starting each task, the runner automatically searches for relevant memories and includes them in the agent's context. Past experience directly informs future work.
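That pre-task recall step amounts to a nearest-neighbor search over embedding vectors. A self-contained sketch using cosine similarity, with toy 3-dimensional vectors standing in for real text-embedding-3-small output:

```typescript
interface Memory { text: string; vector: number[] }

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Return the top-k memories most similar to the query vector.
function recall(memories: Memory[], query: number[], k = 3): Memory[] {
  return [...memories]
    .sort((a, b) => cosine(b.vector, query) - cosine(a.vector, query))
    .slice(0, k);
}

const memories: Memory[] = [
  { text: "Tests must run with bun, not npm", vector: [1, 0, 0] },
  { text: "API lives on port 3013", vector: [0, 1, 0] },
  { text: "Use wts for worktrees", vector: [0, 0, 1] },
];
```

In the real system the query vector would be the embedding of the incoming task description, and the top matches are prepended to the agent's context.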
Persistent Identity
Each agent has four identity files that persist across sessions and evolve over time:
| File | Purpose | Example |
|------|---------|---------|
| SOUL.md | Core persona, values, behavioral directives | "You're not a chatbot. Be thorough. Own your mistakes." |
| IDENTITY.md | Expertise, working style, track record | "I'm the coding arm of the swarm. I ship fast and clean." |
| TOOLS.md | Environment knowledge — repos, services, APIs | "The API runs on port 3013. Use wts for worktree management." |
| CLAUDE.md | Persistent notes and instructions | Learnings, preferences, important context |
Agents can edit these files directly during a session. Changes are synced to the database in real time (on every file edit) and again at session end. When the agent restarts, its identity is restored.
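Restoring identity at startup can be pictured as concatenating whichever of those four files the agent has into its system context. A sketch with an in-memory stand-in for the database — the file names come from the table above, but the assembly function itself is illustrative:

```typescript
const IDENTITY_FILES = ["SOUL.md", "IDENTITY.md", "TOOLS.md", "CLAUDE.md"] as const;

// Build a system prompt from the identity files present for this agent,
// in a fixed order, skipping any that are missing.
function assembleIdentity(files: Record<string, string>): string {
  return IDENTITY_FILES
    .filter(name => files[name])
    .map(name => `## ${name}\n${files[name]}`)
    .join("\n\n");
}

// A partially populated agent: no IDENTITY.md or CLAUDE.md yet.
const stored: Record<string, string> = {
  "SOUL.md": "Be thorough. Own your mistakes.",
  "TOOLS.md": "The API runs on port 3013.",
};
```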