Continuously sync your AI setups with one command. Agent skills, MCPs, and config files tailored to your codebase for Claude Code, Cursor, and Codex.
# Add to your Claude Code skills
git clone https://github.com/caliber-ai-org/ai-setup
npx @rely-ai/caliber score
Score your AI agent config in 3 seconds. No API key. No changes to your code. Just a score.
Your code stays on your machine. Scoring is 100% local — no LLM calls, no code sent anywhere. Generation uses your own AI subscription (Claude Code, Cursor) or your own API key (Anthropic, OpenAI, Vertex AI). Caliber never sees your code.
Caliber scores, generates, and keeps your AI agent configs in sync with your codebase. It fingerprints your project — languages, frameworks, dependencies, architecture — and produces tailored configs for Claude Code, Cursor, and OpenAI Codex. When your code evolves, Caliber detects the drift and updates your configs to match.
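Conceptually, the fingerprinting step boils down to inspecting what is actually on disk. The sketch below shows the idea with a hardcoded marker table purely for illustration; as noted elsewhere in this document, Caliber's real detection is LLM-driven with no hardcoded mappings, and the function and table names here are hypothetical.

```python
import json
from pathlib import Path

# Illustrative marker-file table; NOT Caliber's actual detection logic,
# which is LLM-driven rather than a lookup table.
MARKERS = {
    "package.json": "Node.js",
    "pyproject.toml": "Python",
    "go.mod": "Go",
    "Cargo.toml": "Rust",
    "pom.xml": "Java",
    "Gemfile": "Ruby",
}

def fingerprint(root: str) -> dict:
    """Return a coarse project fingerprint from marker files on disk."""
    root_path = Path(root)
    stacks = sorted(
        stack for marker, stack in MARKERS.items()
        if (root_path / marker).exists()
    )
    return {"root": str(root_path), "stacks": stacks}
```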
Most repos start with a hand-written CLAUDE.md and nothing else. Here's what Caliber finds — and fixes:
Before After caliber init
────────────────────────────── ──────────────────────────────
Agent Config Score 35 / 100 Agent Config Score 94 / 100
Grade D Grade A
FILES & SETUP 6 / 25 FILES & SETUP 24 / 25
QUALITY 12 / 25 QUALITY 22 / 25
GROUNDING 7 / 20 GROUNDING 19 / 20
ACCURACY 5 / 15 ACCURACY 13 / 15
FRESHNESS 5 / 10 FRESHNESS 10 / 10
BONUS 0 / 5 BONUS 5 / 5
Scoring is deterministic — no LLM, no API calls. It cross-references your config files against your actual project filesystem: do referenced paths exist? Are code blocks present? Is there config drift since your last commit?
caliber score --compare main # See how your branch changed the score
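One of those cross-checks, verifying that paths mentioned in a config file actually exist, can be sketched deterministically. This is a minimal illustration of the idea, not Caliber's implementation; the function name and the path-matching heuristic are assumptions.

```python
import re
from pathlib import Path

# Matches path-like tokens such as src/main.py (illustrative heuristic).
PATH_RE = re.compile(r"\b[\w.-]+(?:/[\w.-]+)+\b")

def broken_references(config_file: str) -> list[str]:
    """Return path-like references in a config file that do not exist on
    disk, relative to the file's directory. This is the kind of
    cross-check a deterministic scorer can run with no LLM calls."""
    root = Path(config_file).parent
    text = Path(config_file).read_text()
    return [p for p in PATH_RE.findall(text) if not (root / p).exists()]
```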
Caliber never overwrites your existing configs without asking. The workflow mirrors code review:
- .caliber/backups/ snapshot before every write
- caliber undo restores everything to its previous state
- If your existing config scores 95+, Caliber skips full regeneration and applies targeted fixes to the specific checks that are failing
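The backup-before-write pattern behind caliber undo can be sketched as follows; the function name is hypothetical and this is an illustration of the safety pattern, not Caliber's code.

```python
import shutil
from pathlib import Path

def safe_write(path: Path, content: str, backup_dir: Path) -> None:
    """Copy the current file into a backup directory before overwriting
    it, so an undo command can later restore the previous state."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    if path.exists():
        shutil.copy2(path, backup_dir / path.name)
    path.write_text(content)
```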
Caliber is not a one-time setup tool. It's a loop:
caliber score
│
▼
┌──── caliber init ◄────────────────┐
│ (generate / fix) │
│ │ │
│ ▼ │
│ your code evolves │
│ (new deps, renamed files, │
│ changed architecture) │
│ │ │
│ ▼ │
└──► caliber refresh ──────────────►┘
(detect drift, update configs)
Auto-refresh hooks run this loop automatically — on every commit or at the end of each AI coding session.
Claude Code
- CLAUDE.md — Project context, build/test commands, architecture, conventions
- CALIBER_LEARNINGS.md — Patterns learned from your AI coding sessions
- .claude/skills/*/SKILL.md — Reusable skills (OpenSkills format)
- .mcp.json — Auto-discovered MCP server configurations
- .claude/settings.json — Permissions and hooks

Cursor

- .cursor/rules/*.mdc — Modern rules with frontmatter (description, globs, alwaysApply)
- .cursor/skills/*/SKILL.md — Skills for Cursor
- .cursor/mcp.json — MCP server configurations

OpenAI Codex

- AGENTS.md — Project context for Codex
- .agents/skills/*/SKILL.md — Skills for Codex

TypeScript, Python, Go, Rust, Java, Ruby, Terraform, and more. Language and framework detection is fully LLM-driven — no hardcoded mappings. Caliber works on any project.
Target a single platform or all three at once:
caliber init --agent claude # Claude Code only
caliber init --agent cursor # Cursor only
caliber init --agent codex # Codex only
caliber init --agent all # All three
caliber init --agent claude,cursor # Comma-separated
Not happy with the generated output? During review, refine via natural language — describe what you want changed and Caliber iterates until you're satisfied.
Caliber detects the tools your project uses (databases, APIs, services) and auto-configures matching MCP servers for Claude Code and Cursor.
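The shape of that auto-configuration step can be sketched like this. The mapping and package names below are placeholders invented for illustration, not Caliber's actual registry; only the mcpServers key mirrors the real config file format.

```python
import json

# Hypothetical registry mapping detected services to MCP server launch
# commands; the package names are placeholders, not real packages.
MCP_REGISTRY = {
    "postgres": {"command": "npx", "args": ["-y", "example-postgres-mcp"]},
    "github": {"command": "npx", "args": ["-y", "example-github-mcp"]},
}

def build_mcp_config(detected: list[str]) -> str:
    """Render an mcp.json-style document for the detected services,
    skipping anything the registry does not know about."""
    servers = {s: MCP_REGISTRY[s] for s in detected if s in MCP_REGISTRY}
    return json.dumps({"mcpServers": servers}, indent=2)
```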
caliber score evaluates your config quality without any LLM calls — purely by cross-referencing config files against your actual project filesystem.
| Category | Points | What it checks |
|---|---|---|
| Files & Setup | 25 | Config files exist, skills present, MCP servers, cross-platform parity |
| Quality | 25 | Code blocks, concise token budget, concrete instructions, structured headings |
| Grounding | 20 | Config references actual project directories and files |
| Accuracy | 15 | Referenced paths exist on disk, config freshness vs. git history |
| Freshness & Safety | 10 | Recently updated, no leaked secrets, permissions configured |
| Bonus | 5 | Auto-refresh hooks, AGENTS.md, OpenSkills format |
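Summing the rubric is straightforward; a minimal sketch follows. The category maxima come from the table above, but the letter-grade cutoffs here are an assumption for illustration, not Caliber's published thresholds.

```python
# Category maxima from the scoring rubric. Grade cutoffs below are an
# illustrative assumption, not Caliber's actual thresholds.
MAX_POINTS = {
    "Files & Setup": 25, "Quality": 25, "Grounding": 20,
    "Accuracy": 15, "Freshness & Safety": 10, "Bonus": 5,
}

def grade(scores: dict[str, int]) -> tuple[int, str]:
    """Clamp each category to its maximum, sum, and map to a letter."""
    total = sum(min(scores.get(cat, 0), cap) for cat, cap in MAX_POINTS.items())
    if total >= 90:
        letter = "A"
    elif total >= 70:
        letter = "B"
    elif total >= 50:
        letter = "C"
    elif total >= 30:
        letter = "D"
    else:
        letter = "F"
    return total, letter
```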
Every failing check includes structured fix data — when caliber init runs, the LLM receives exactly what's wrong and how to fix it.
Caliber watches your AI coding sessions and learns from them. Hooks capture tool usage, failures, and your corrections — then an LLM distills operational patterns into CALIBER_LEARNINGS.md.
caliber learn install # Install hooks for Claude Code and Cursor
caliber learn status # View hook status, event count, and ROI summary
caliber learn finalize # Manually trigger analysis (auto-runs on session end)
caliber learn remove # Remove hooks
Learned items are categorized by type — [correction], [gotcha], [fix], [pattern], [env], [convention] — and automatically deduplicated.
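Deduplication of learned items can be done by keying on the category plus a normalized form of the text. This is an illustrative sketch of that idea, not Caliber's implementation.

```python
def dedupe(items: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Drop learned items whose (category, normalized text) pair was
    already seen, keeping first occurrences in order."""
    seen: set[tuple[str, str]] = set()
    out: list[tuple[str, str]] = []
    for category, text in items:
        # Normalize by lowercasing and collapsing whitespace.
        key = (category, " ".join(text.lower().split()))
        if key not in seen:
            seen.add(key)
            out.append((category, text))
    return out
```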
Keep configs in sync with your codebase automatically:
| Hook | Trigger | What it does |
|---|---|---|
| Git pre-commit | Before each commit | Refreshes docs and stages updated files |
| Claude Code session end | End of each session | Runs caliber refresh and updates docs |
| Learning hooks | During each session | Captures events for session learning |
caliber hooks --install # Enable refresh hooks
caliber hooks --remove # Disable refresh hooks
The refresh command analyzes your git diff (committed, staged, and unstaged changes) and updates config files to reflect what changed.
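Collecting those three diff sources amounts to three git invocations; a sketch, assuming a helper name of our own choosing, might look like this.

```python
import subprocess

def changed_files(repo: str, base: str = "HEAD~1") -> set[str]:
    """Union of committed (vs. base), staged, and unstaged file changes,
    mirroring the three diff sources the refresh step inspects."""
    def git_diff(*args: str) -> set[str]:
        out = subprocess.run(
            ["git", "-C", repo, "diff", "--name-only", *args],
            capture_output=True, text=True, check=True,
        ).stdout
        return {line for line in out.splitlines() if line}
    # Committed changes vs. base, then staged, then unstaged.
    return git_diff(base, "HEAD") | git_diff("--cached") | git_diff()
```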
- .caliber/backups/ snapshot before every write
- caliber undo restores everything to its previous state
- --dry-run previews changes before applying

| Command | Description |
|---|---|
| caliber score | Score config quality (deterministic, no LLM) |
| caliber score --compare <ref> | Compare current score against a git ref |
| caliber init | Full setup wizard — analyze, generate, review, install hooks |
| caliber regenerate | Re-analyze and regenerate configs (aliases: regen, re) |
| caliber refresh | Update docs based on recent code changes |
| caliber skills | Discover and install community skills |
| caliber learn | Session learning — install hooks, view status, finalize analysis |
| caliber hooks | Manage auto-refresh hooks |
| caliber config | Configure LLM provider, API key, and model |
| caliber status | Show current setup status |
| caliber undo | Revert all changes made by Caliber |
No. Caliber shows you a diff of every proposed change. You accept, refine, or decline each one. Originals are backed up automatically.
Scoring: No. caliber score runs 100% locally with no LLM.
Generation: Uses your existing Claude Code or Cursor subscription (no API key needed), or bring your own key for Anthropic, OpenAI, or Vertex AI.