by chorus-codes
Multi-LLM peer review for code decisions. Bring your own CLI; Chorus convenes 2-4 other LLMs to review the work before you ship.
# Add to your Claude Code skills
git clone https://github.com/chorus-codes/chorus
A second opinion (and a third) before you ship AI-written code — using the AI subscriptions you already pay for.
2–3 different AI tools review the same change in parallel, only green-lighting when they agree. Runs on your existing Claude Pro / ChatGPT Plus / Gemini Advanced — typical review costs $0 out of pocket.
One AI writes. Three review. You ship only when they agree — using AI subscriptions you already pay for.
🤖 AI coding tools are confident — and wrong about 5% of the time in subtle ways that are hard to spot until production.
🪞 The model that wrote your code can't see its own blind spots. Asking GPT to review GPT's work is theatre — same training, same biases.
💸 Multi-AI review on raw API keys gets expensive fast. Every diff × 3 reviewers × pay-per-token = real money. So nobody does it routinely.
✅ Different vendors review each other. Claude writes, GPT and Gemini check it. Different blind spots cover each other. Disagreement = red flag you merge.
✅ Uses your existing AI subscriptions. You're already paying for Claude Pro / ChatGPT Plus / Gemini Advanced (~$20/mo each). Chorus drives them headlessly through their CLIs — every multi-AI review costs $0 out of pocket, just counts against the quota you already have. Per-token API users save 10-100× vs running the same prompts directly.
✅ Local-first, zero markup. Your code never reaches a new vendor. Chorus runs on your laptop, talks to the AI tools you already trust, and shuts up. Open source, Apache-2.0.
That's the whole pitch.
🚨 You asked Claude to write a divide(a, b) helper.
It says "looks correct!" You ship. Production crashes at 2am because nobody handled b = 0.
With Chorus: GPT or Gemini would have flagged it in the review pass before you merged.
🔧 You're refactoring a critical path. Your AI rewrote 200 lines and says it's behaviour-equivalent. You're tired and skeptical. Run it through Chorus. Three different AIs all saying "yes, equivalent" lets you sleep.
🏗️ Big architectural call — queue vs polling, sync vs async, this DB vs that one. Write a paragraph, hit Chorus. Three different models give you three angles you hadn't thought of.
📝 Reviewing a 600-line PR. You're short on time. Paste the diff into Chorus. Three reviewers spot the obvious bugs in 90 seconds. Your job becomes the 5% they couldn't catch.
⚔️ Test-driven development where neither AI cheats.
One AI writes tests blind to the code; another AI writes code to pass them. Use the red-green template.
🐛 Hunting a flaky bug. Reproduces 1-in-20, no obvious pattern. Drop the failing test + suspect code into Chorus. Each reviewer attacks the bug from a different angle — race? clock skew? off-by-one? — and you land on the cause faster than walking it alone.
npm i -g chorus-codes # install (no sudo — see below)
chorus init # finds AI tools you already have, wires up MCP
chorus start --ui # opens http://localhost:5050
Paste a task. Hit submit. Watch the AIs argue.
Don't `sudo npm install -g` if you use nvm, fnm, asdf, or any per-user Node manager. `sudo` writes to root's npm prefix (`/usr/local/...`) but your `chorus` command resolves through your user prefix — the install succeeds in a place PATH never sees, leaving you on a stale version. If `npm install -g` errors with EACCES, set up an unprivileged prefix: `npm config set prefix ~/.npm-global`, then add `~/.npm-global/bin` to your PATH.
To upgrade later: chorus update. The updater locates the running binary and writes to that exact install location, so it always lands where PATH expects regardless of how you installed.
chorus init registers Chorus as an MCP server with every CLI / IDE it detects (Claude Code, Codex, Gemini CLI, Cursor, Windsurf, Kimi, OpenCode). After that, just ask the assistant in plain English:
> Use chorus to review the staged diff against main
> Ask chorus to run the architect-review template on src/payments/*.ts
> chorus, get a second opinion on this function from claude + gemini
Or invoke a specific MCP tool directly — every CLI uses the same name (chorus) and exposes nine tools:
| Tool | What it does |
|---|---|
| create_chat | Kick off a review (returns a chatId + URL) |
| wait_for_chat | Block until the run reaches a terminal state |
| get_chat_status | Poll a running chat without blocking |
| cancel_chat / resume_chat | Stop or restart |
| list_templates / list_personas | Discover what's available |
| invoke_persona | Run a single persona (skip multi-reviewer fan-out) |
| list_blocked | See chats that need human input |
Example raw invocation (from any MCP client):
// tool: chorus.create_chat
{
  "template": "code-review",
  "work": "Review the staged diff vs main. Flag race conditions and missing tests."
}
// → { "chatId": "abc123", "url": "http://localhost:5050/runs/abc123", "status": "reviewing" }
Stream results back into your editor, or open the URL to watch live.
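The usual flow chains `create_chat` into `wait_for_chat` on the returned id. A sketch of that follow-up call — the `chatId` parameter name is an assumption, mirrored from the `create_chat` response above:

```json
// tool: chorus.wait_for_chat
{
  "chatId": "abc123"
}
// blocks until the run reaches a terminal state
```

If you'd rather not block, poll the same id with `get_chat_status` instead.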
Requires Node 20+ and at least one of these (you probably already have one):
npm i -g @anthropic-ai/claude-code # Anthropic — uses Claude Pro sub
npm i -g @openai/codex # OpenAI — uses ChatGPT Plus sub
npm i -g @google/gemini-cli # Google — uses Gemini Advanced sub
Pick whichever vendor you already pay for. Or skip CLIs entirely and add an OpenRouter key in Settings after chorus init.
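Not sure what's already on your machine? A quick check — this assumes the default binary names these packages install (`claude`, `codex`, `gemini`):

```shell
# Report which vendor CLIs are on PATH before running chorus init
for cmd in claude codex gemini; do
  command -v "$cmd" >/dev/null 2>&1 && echo "$cmd: found" || echo "$cmd: missing"
done
```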
You ask Claude to write this:
function divide(a, b) {
return a / b;
}
Submit to Chorus with the Code Review template (1 writer + 2 reviewers, both must agree to ship):
| Step | What happens |
|---|---|
| 1. Claude writes | "Looks correct to me!" |
| 2. GPT reviews in parallel | 🚨 No type validation — divide('a','b') returns NaN |
| 3. Gemini reviews in parallel | 🚨 Missing zero-check — divide(1, 0) returns Infinity |
| 4. Verdict | ❌ REJECT — both reviewers flagged real bugs |
Now you know what to fix before you push.
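A fix that addresses both reviewer findings might look like this — a sketch, not Chorus output:

```javascript
// Corrected divide: validates input types (GPT's flag)
// and rejects division by zero (Gemini's flag).
function divide(a, b) {
  if (typeof a !== "number" || typeof b !== "number") {
    throw new TypeError("divide expects two numbers");
  }
  if (b === 0) {
    throw new RangeError("division by zero");
  }
  return a / b;
}
```

Resubmit the fixed version and both reviewers have nothing left to flag.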
Don't figure out which AIs to use yourself. Pick a pattern that fits the moment:
| Use this when... | Template |
|---|---|
| Pre-merge sanity check | code-review — 1 writer + 2 reviewers, both must agree |
| Diagnosing a weird bug | bug-diagnose — one hypothesises, one challenges |
| Big architectural call | architect-review — 3 different vendors critique your plan |
| TDD where neither AI cheats | red-green — tests written blind to code |
| Quick audit of a diff someone else wrote | review-only — paste, get 3 opinions, no writer |
Make your own by dropping a YAML file in ~/.chorus/templates/. Or duplicate one of the built-ins and tweak.
id: security-pre-merge
label: Security Pre-Merge
description: Sentinel persona on every reviewer; everyone must approve.
slots:
  doer:
    lineage: anthropic
    model: claude-sonnet-4-6
  reviewers:
    - { lineage: openai, model: codex, persona: sentinel }
    - { lineage: google, model: gemini-2.5-pro, persona: sentinel }
    - { lineage: opencode, model: opencode-go/kimi-k2.6, persona: sentinel }
quorum:
  type: unanimous
Each reviewer can wear a "hat" — a focus area Chorus prepends to their prompt:
| Persona | What they look for |
|---|---|
| 🛡️ Sentinel | Security holes, auth bypass, injection |
| 🗺️ Cartographer | Cross-platform issues (Windows vs Mac, browser support) |
| 💰 Accountant | Cost regressions (extra DB queries, API calls) |
| ⚡ Profiler | Pe… |