by spencermarx
AI-powered multi-agent code review. Simulates a customizable team of Engineers performing code review with built-in discourse.
```sh
# Add to your Claude Code skills
git clone https://github.com/spencermarx/open-code-review
```

Prerequisites: Node.js >= 20, Git, and an AI coding assistant.

```sh
# 1. Install the CLI globally
npm install -g @open-code-review/cli

# 2. Initialize OCR in your project
cd your-project
ocr init

# 3. Launch the dashboard and run your first review
ocr dashboard
```
Your browser will open the OCR dashboard — you're ready to run your first review.
ocr init detects your installed AI tools (Claude Code, Cursor, Windsurf, and 11 more) and configures each one automatically. Then open the dashboard to run a review, browse results, and manage your workflow from the browser.
Or run reviews directly from your AI coding assistant:
```sh
/ocr:review                                   # Claude Code / Cursor
/ocr-review                                   # Windsurf / other tools
/ocr-review Review against openspec/spec.md   # With requirements
```
When you ask an AI to "review my code," you get a single perspective: one pass, one set of priorities. OCR changes that fundamentally.
OCR discovers your project standards from CLAUDE.md, .cursorrules, OpenSpec configs, and other common patterns, and reviewers apply your conventions.

```
                    ┌─────────────┐
                    │  Tech Lead  │ ← Orchestrates the review
                    └──────┬──────┘
                           │
         ┌─────────────────┼─────────────────┐
         │                 │                 │
         ▼                 ▼                 ▼
 ┌───────────────┐ ┌───────────────┐ ┌───────────────┐
 │   Your Team   │ │   Your Team   │ │   Your Team   │
 │  Composition  │ │  Composition  │ │  Composition  │
 └───────────────┘ └───────────────┘ └───────────────┘
         │                 │                 │
         └─────────────────┼─────────────────┘
                           │
                    ┌──────▼──────┐
                    │  Discourse  │ ← Reviewers debate findings
                    └──────┬──────┘
                           │
                    ┌──────▼──────┐
                    │  Synthesis  │ ← Unified, prioritized feedback
                    └─────────────┘
```
Note: OCR does not replace human code review. The goal is to reduce the burden on human reviewers by catching issues earlier — so human review is faster and more focused on things machines can't catch.
The dashboard is the recommended way to run reviews, browse results, and manage your workflow. Launch it with ocr dashboard.
The Command Center lets you launch multi-agent code reviews and Code Review Maps directly from the dashboard. Specify targets, add requirements, toggle fresh starts — then watch live terminal output as agents work.
View verdict banners, individual reviewer cards, findings tables, and cross-reviewer discourse for every review round. Set triage status on findings (needs review, in progress, changes made, acknowledged, dismissed) with filtering and sorting.
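The five triage statuses above lend themselves to a simple data model. The sketch below is purely illustrative (it is not OCR's actual schema or API); it shows how findings tagged with those statuses can be filtered and sorted the way the dashboard describes:

```python
from dataclasses import dataclass, field
from enum import Enum

class Triage(Enum):
    # The five triage statuses listed for the findings table
    NEEDS_REVIEW = "needs review"
    IN_PROGRESS = "in progress"
    CHANGES_MADE = "changes made"
    ACKNOWLEDGED = "acknowledged"
    DISMISSED = "dismissed"

@dataclass
class Finding:
    title: str
    severity: str  # "high" | "medium" | "low"
    triage: Triage = Triage.NEEDS_REVIEW

findings = [
    Finding("SQL injection in search endpoint", "high"),
    Finding("Missing index on sessions table", "medium", Triage.IN_PROGRESS),
    Finding("Inconsistent handler naming", "low", Triage.DISMISSED),
]

# Hide dismissed findings and sort the rest by severity,
# mirroring the dashboard's filtering and sorting
order = {"high": 0, "medium": 1, "low": 2}
open_findings = sorted(
    (f for f in findings if f.triage is not Triage.DISMISSED),
    key=lambda f: order[f.severity],
)
```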
Navigate large changesets with section-based breakdowns, rendered Mermaid dependency graphs, and file-level progress tracking.
Two posting modes are available from the review round page.
The Team page lets you browse all 28 reviewer personas grouped by tier (Generalists, Specialists, Famous Engineers, Custom). View full prompts, focus areas, and persona details. Create new reviewers or sync metadata — all from the dashboard.
After reviewing findings, address them directly. Copy a portable AI prompt into any coding tool, or — with Claude Code or OpenCode detected — run an agent directly from the dashboard to corroborate, validate, and implement changes.
AI-powered chat on every review round and map run page. Ask follow-up questions about specific findings, request clarification on reviewer reasoning, or explore alternative approaches.
The dashboard reads from the same .ocr/ directory and SQLite database used by the review workflow. No separate configuration is needed.
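Sharing one SQLite file between the CLI workflow and the dashboard is a common pattern; the sketch below shows the general shape of it with an invented table name and columns (OCR's real schema under .ocr/ is internal and may differ):

```python
import sqlite3

# ":memory:" stands in for the real on-disk database under .ocr/
conn = sqlite3.connect(":memory:")

# Hypothetical schema: one row per review round
conn.execute(
    "CREATE TABLE review_rounds ("
    "  id INTEGER PRIMARY KEY,"
    "  session TEXT,"
    "  round INTEGER,"
    "  verdict TEXT)"
)

# The review workflow writes a round...
conn.execute(
    "INSERT INTO review_rounds (session, round, verdict) "
    "VALUES ('2026-01-26-main', 2, 'approve-with-comments')"
)

# ...and the dashboard reads it back, no extra configuration needed
rows = conn.execute(
    "SELECT session, round, verdict FROM review_rounds ORDER BY round DESC"
).fetchall()
```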
Run all OCR commands directly from your AI coding assistant using slash commands.
Stage your changes:

```sh
git add .
```
Then in your AI assistant:
```sh
/ocr-review                                   # Windsurf / flat-prefix tools
/ocr:review                                   # Claude Code / Cursor
/ocr-review Review against openspec/spec.md   # With requirements context
```
In a separate terminal:
```sh
ocr progress
```
```
Open Code Review
2026-01-26-main · Round 2 · 1m 23s
━━━━━━━━━━━━━━━━──────── 60% · Parallel Reviews

✓ Context Discovery
✓ Change Context
✓ Tech Lead Analysis
▸ Parallel Reviews
    Round 2
    ✓ Principal #1  2 │ ✓ Principal #2  1 │ ○ Quality #1  0 │ ○ Quality #2  0
    Round 1  ✓ 4 reviewers
· Aggregate Findings
· Reviewer Discourse
· Final Synthesis

Ctrl+C to exit
```
```sh
/ocr-map                 # Generate a Code Review Map for large changesets
/ocr-post                # Post review to GitHub PR
/ocr-doctor              # Verify installation
/ocr-reviewers           # List available reviewer personas
/ocr-history             # List past review sessions
/ocr-show [session]      # Display a specific past review
```
For Claude Code / Cursor, use /ocr:review, /ocr:map, /ocr:post, etc.
OCR follows an 8-phase workflow orchestrated by a Tech Lead agent:
| Phase | What happens |
|-------|--------------|
| 1. Context Discovery | Load config, discover project standards, read OpenSpec context |
| 2. Change Analysis | Analyze git diff, understand what changed and why |
| 3. Tech Lead Assessment | Summarize changes, identify risks, select reviewer team |
| 4. Parallel Reviews | Each reviewer examines code independently based on your team config |
| 5. Aggregation | Merge findings from redundant reviewers |
| 6. Discourse | Reviewers challenge, validate, and connect findings |
| 7. Synthesis | Produce prioritized, deduplicated final review |
| 8. Presentation | Display results; optionally post to GitHub |
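Conceptually, the workflow is a linear pipeline with one fan-out step at phase 4. The sketch below is a conceptual illustration with invented function names, not OCR's internals; it shows the parallel-review, aggregation, and synthesis shape of the table above:

```python
from concurrent.futures import ThreadPoolExecutor

def review(persona: str, diff: str) -> list[str]:
    # Hypothetical stand-in for a reviewer agent examining the diff
    return [f"{persona}: finding about '{diff}'"]

def run_review(diff: str, team: list[str]) -> list[str]:
    # Phases 1-3: context discovery, change analysis, tech lead
    # assessment (elided in this sketch)

    # Phase 4: parallel reviews -- each persona works independently
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda p: review(p, diff), team))

    # Phase 5: aggregation -- merge per-reviewer findings
    findings = [f for per_reviewer in results for f in per_reviewer]

    # Phases 6-7: discourse and synthesis -- reduced here to
    # deduplicating and ordering the merged findings
    return sorted(set(findings))

report = run_review("auth.py", ["Principal #1", "Principal #2", "Quality #1"])
```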
For large changesets (20+ files), Code Review Maps provide a structured navigation document — grouping related changes into sections, identifying key files, and surfacing dependencies with Mermaid diagrams.
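One simple way to get map-style sections is to group changed files by their top-level directory. This is a sketch of the grouping idea only, with a made-up changeset; it is not OCR's actual sectioning algorithm:

```python
from collections import defaultdict

# Hypothetical changeset from a large diff
changed = [
    "src/api/auth.py",
    "src/api/session.py",
    "src/db/migrations/001_init.sql",
    "docs/security.md",
]

# Group related changes into sections by top-level directory
sections: dict[str, list[str]] = defaultdict(list)
for path in changed:
    sections[path.split("/", 1)[0]].append(path)
```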
/oc