by unohee
OpenSwarm — Autonomous AI dev team orchestrator powered by Claude Code CLI. Discord control, Linear integration, cognitive memory.
```shell
# Add to your Claude Code skills
git clone https://github.com/unohee/OpenSwarm
```
Autonomous AI agent orchestrator — Claude, GPT, Codex, and local models (Ollama/LMStudio/llama.cpp)
OpenSwarm is developed and maintained in my spare time by a single author. If the project saves you time or money, please consider chipping in — it directly funds ongoing updates, bug fixes, and new adapters.
One-off contributions are perfectly fine — there is no subscription tier and no feature is paywalled. Thank you.
OpenSwarm orchestrates multiple AI agents as autonomous code workers. It picks up Linear issues, runs Worker/Reviewer pair pipelines, reports to Discord, and retains long-term memory via LanceDB. Supports Claude Code, OpenAI GPT, Codex, and local open-source models via Ollama, LMStudio, or llama.cpp.
```shell
npm install -g @intrect/openswarm
openswarm
```
That's it: running `openswarm` with no arguments launches the TUI chat interface immediately.

| Key | Action |
|-----|--------|
| `Tab` | Switch tabs (Chat / Projects / Tasks / Stuck / Logs) |
| `Enter` | Send message |
| `Shift+Enter` | Newline |
| `i` | Focus input |
| `Esc` | Exit input focus |
| `Ctrl+C` | Quit |

Status bar shows: provider · model · message count · cumulative cost.
```shell
openswarm                          # TUI chat (default)
openswarm chat [session]           # Simple readline chat
openswarm start                    # Start full daemon (requires config.yaml)
openswarm run "Fix the bug" -p ~/my-project    # Run a single task
openswarm exec "Run tests" --local --pipeline  # Execute a task (here: local, full pipeline)
openswarm init                     # Generate config.yaml scaffold
openswarm validate                 # Validate config.yaml

# Code Registry & BS Detector
openswarm check --scan             # Scan repo → register all entities
openswarm check src/foo.ts         # File brief (entities, tests, risk)
openswarm check --bs               # BS pattern scan (bad code smells)
openswarm check --stats            # Registry statistics
openswarm check --high-risk       # High-risk entities
openswarm check --search "name"    # Full-text search

openswarm annotate "funcName" --deprecate "reason"
openswarm annotate "funcName" --tag "needs-refactor"
openswarm annotate "funcName" --warn "error/security: SQL injection"
```
`openswarm exec` options:

| Option | Description |
|--------|-------------|
| `--path <path>` | Project path (default: cwd) |
| `--timeout <seconds>` | Timeout in seconds (default: 600) |
| `--local` | Execute locally without daemon |
| `--pipeline` | Full pipeline: worker + reviewer + tester + documenter |
| `--worker-only` | Worker only, no review |
| `-m, --model <model>` | Model override for worker |
Exit codes: 0 success · 1 failure · 2 timeout
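Those exit codes make `exec` straightforward to script. A wrapper like the sketch below (the `describe_exit` helper is hypothetical, not part of the CLI) can translate them for logs or CI:

```shell
# Map OpenSwarm's documented exit codes (0/1/2) to readable outcomes.
describe_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "failure" ;;
    2) echo "timeout" ;;
    *) echo "unknown ($1)" ;;
  esac
}

# Example (assumes openswarm is installed):
#   openswarm exec "Run tests" --local --worker-only
#   describe_exit "$?"
```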
For autonomous operation (Linear issue processing, Discord control, PR auto-improvement), you need a full config:
- Claude Code CLI (`claude -p`) — default provider
- Codex CLI (`codex exec`) — optional alternative provider
- GitHub CLI (`gh`) for CI monitoring (optional)

```shell
git clone https://github.com/unohee/OpenSwarm.git
cd OpenSwarm
npm install
cp config.example.yaml config.yaml
```
Create a `.env` file:

```shell
DISCORD_TOKEN=your-discord-bot-token
DISCORD_CHANNEL_ID=your-channel-id
LINEAR_API_KEY=your-linear-api-key
LINEAR_TEAM_ID=your-linear-team-id
```
`config.yaml` supports `${VAR}` / `${VAR:-default}` substitution and is validated with Zod schemas.
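The `${VAR:-default}` form mirrors POSIX shell parameter expansion, so you can check the fallback semantics in any shell (the variable names here are just examples; whether the config loader treats empty strings exactly like the shell does is an assumption):

```shell
# ${VAR:-default}: use the variable if set and non-empty, otherwise the fallback.
unset LINEAR_TEAM_ID
echo "${LINEAR_TEAM_ID:-no-team}"   # prints: no-team
LINEAR_TEAM_ID=abc123
echo "${LINEAR_TEAM_ID:-no-team}"   # prints: abc123
```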
| Section | Description |
|---------|-------------|
| discord | Bot token, channel ID, webhook URL |
| linear | API key, team ID |
| github | Repos list for CI monitoring |
| agents | Agent definitions (name, projectPath, heartbeat interval) |
| autonomous | Schedule, pair mode, role models, decomposition settings |
| prProcessor | PR auto-improvement schedule, retry limits, conflict resolver config |
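Taken together, a minimal `config.yaml` might look like the sketch below. The top-level section names come from the table above, but the nested keys are illustrative assumptions; check `config.example.yaml` for the real schema.

```yaml
# Sketch only: nested keys are assumptions; see config.example.yaml.
discord:
  token: ${DISCORD_TOKEN}
  channelId: ${DISCORD_CHANNEL_ID}
linear:
  apiKey: ${LINEAR_API_KEY}
  teamId: ${LINEAR_TEAM_ID}
github:
  repos:
    - unohee/OpenSwarm
agents:
  - name: main
    projectPath: ~/my-project
```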
```yaml
adapter: claude   # "claude" | "codex" | "gpt" | "local"
```

Switch at runtime via Discord: `!provider codex` / `!provider claude`
| Adapter | Backend | Models | Auth |
|---------|---------|--------|------|
| claude | Claude Code CLI | sonnet-4, haiku-4.5, opus-4 | CLI auth |
| codex | OpenAI Codex CLI | o3, o4-mini | CLI auth |
| gpt | OpenAI API | gpt-4o, o3, gpt-4.1 | OAuth PKCE |
| local | Ollama / LMStudio / llama.cpp | gemma4, llama3, mistral, qwen, etc. | None |
Local models are auto-detected on standard ports (Ollama :11434, LMStudio :1234, llama.cpp :8080).
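One way to approximate that detection from the shell is a simple port probe, sketched below. The endpoint path, `curl` flags, and function names are assumptions for illustration, not OpenSwarm's actual detection code:

```shell
# Probe the documented default ports for a running local backend.
# The probe command is injectable so the selection logic can be tested
# without live servers.
default_probe() { curl -fsS -m 1 "http://127.0.0.1:$1/" >/dev/null 2>&1; }

detect_local_backend() {
  probe=${1:-default_probe}
  for entry in "ollama:11434" "lmstudio:1234" "llama.cpp:8080"; do
    name=${entry%:*}
    port=${entry##*:}
    if "$probe" "$port"; then
      echo "$name"
      return 0
    fi
  done
  return 1
}
```

With no argument it probes the real ports; passing a different probe command makes the Ollama → LMStudio → llama.cpp preference order easy to test.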
Per-role adapter overrides:
```yaml
autonomous:
  defaultRoles:
    worker:
      adapter: codex
      model: o4-mini
    reviewer:
      adapter: claude
      model: claude-sonnet-4-20250514
```
Full role configuration, with model escalation and optional roles:

```yaml
autonomous:
  defaultRoles:
    worker:
      model: claude-haiku-4-5-20251001
      escalateModel: claude-sonnet-4-20250514
      escalateAfterIteration: 3
      timeoutMs: 1800000
    reviewer:
      model: claude-haiku-4-5-20251001
      timeoutMs: 600000
    tester:
      enabled: false
    documenter:
      enabled: false
    auditor:
      enabled: false
```
```shell
npm run service:install    # Build and install as system service
npm run service:start      # Start
npm run service:stop       # Stop
npm run service:restart    # Restart
npm run service:status     # Status and recent logs
npm run service:logs       # stdout (follow mode)
npm run service:errors     # stderr (follow mode)
npm run service:uninstall  # Uninstall

npm run build && npm start # Production
npm run dev                # Development (tsx watch)
docker compose up -d       # Docker
```
```
                 ┌──────────────────────────┐
                 │        Linear API        │
                 │ (issues, state, memory)  │
                 └─────────────┬────────────┘
                               │
         ┌─────────────────────┼─────────────────────┐
         │                     │                     │
         v                     v                     v
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│ AutonomousRunner │  │  DecisionEngine  │  │  TaskScheduler   │
│ (heartbeat loop) │─>│  (scope guard)   │─>│ (queue + slots)  │
└────────┬─────────┘  └──────────────────┘  └────────┬─────────┘
         │                                           │
         v                                           v
┌──────────────────────────────────────────────────────────────┐
│                         PairPipeline                         │
│   ┌────────┐   ┌──────────┐   ┌────────┐   ┌─────────────┐   │
│   │ Worker │──>│ Reviewer │──>│ Tester │──>│ Documenter  │   │
│   │(Adapter│<──│(Adapter) │   │(Adapter│   │  (Adapter)  │   │
│   └───┬────┘   └──────────┘   └────────┘   └─────────────┘   │
│       │                       ↕ StuckDetector                │
│   ┌───┴────────────────────────────────────────────────────┐ │
│   │  Adapters: Claude | Codex | GPT | Local (Ollama/LMS)   │ │
│   └────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────┘
       │                    │                     │
       v                    v                     v
┌──────────────┐  ┌──────────────────┐  ┌──────────────────┐
│  Discord Bot │  │ Memory (LanceDB  │  │ Knowledge Graph  │
│  (commands)  │  │  + Xenova E5)    │  │ (code analysis)  │
└──────────────┘  └──────────────────┘  └────────┬─────────┘
                                                 │
                                        ┌────────┴─────────┐
                                        │  Code Registry   │
                                        │ (SQLite + FTS5)  │
                                        │  + BS Detector   │
                                        └──────────────────┘
```
BS Detector: flags bad-code patterns (`as any`, etc.) with pipeline guard integration.