Continuously sync your AI setups with one command. Agent skills, MCP servers, and config files tailored to your codebase for Claude Code, Cursor, and Codex.
# Add to your Claude Code skills
```shell
git clone https://github.com/caliber-ai-org/ai-setup
```
Hand-written CLAUDE.md files go stale the moment you refactor. Your AI agent hallucinates paths that no longer exist, misses new dependencies, and gives advice based on yesterday's architecture. Caliber generates and maintains your AI context files (CLAUDE.md, .cursor/rules/, AGENTS.md, copilot-instructions.md) so they stay accurate as your code evolves — and keeps every agent on your team in sync, whether they use Claude Code, Cursor, Codex, OpenCode, or GitHub Copilot.
Most repos start with a hand-written CLAUDE.md and nothing else. Here's what Caliber finds — and fixes:
```
Before                              After /setup-caliber
──────────────────────────────      ──────────────────────────────
Agent Config Score   35 / 100       Agent Config Score   94 / 100
Grade D                             Grade A

FILES & SETUP         6 / 25        FILES & SETUP        24 / 25
QUALITY              12 / 25        QUALITY              22 / 25
GROUNDING             7 / 20        GROUNDING            19 / 20
ACCURACY              5 / 15        ACCURACY             13 / 15
FRESHNESS             5 / 10        FRESHNESS            10 / 10
BONUS                 0 / 5         BONUS                 5 / 5
```
Scoring is deterministic — no LLM, no API calls. It cross-references your config files against your actual project filesystem: do referenced paths exist? Are code blocks present? Is there config drift since your last commit?
```shell
caliber score --compare main   # See how your branch changed the score
```
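A minimal sketch of that style of check, purely for illustration (this is not Caliber's actual rule set): extract path-like tokens from a config file and flag any that don't exist on disk.

```shell
# Illustrative only: flag path-like references in a config file that no
# longer exist on disk. Caliber's real checks are broader (code blocks,
# git drift), but the principle is the same: no LLM, just the filesystem.
check_paths() {
  grep -oE '[A-Za-z0-9_.-]+(/[A-Za-z0-9_.-]+)+' "$1" | sort -u |
  while read -r p; do
    [ -e "$p" ] || echo "stale reference: $p"
  done
}
```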
Requires Node.js >= 20.
```shell
npx @rely-ai/caliber bootstrap
```
Then, in your terminal (not the IDE chat), start a Claude Code or Cursor CLI session and type:
```
/setup-caliber
```
Your agent detects your stack, generates tailored configs for every platform your team uses, sets up pre-commit hooks, and enables continuous sync — all from inside your normal workflow.
Don't use Claude Code or Cursor? Run caliber init instead — it's the same setup as a CLI wizard. Works with any LLM provider: bring your own Anthropic, OpenAI, or Vertex AI key.
Your code stays on your machine. Bootstrap is 100% local — no LLM calls, no code sent anywhere. Generation uses your own AI subscription or API key. Caliber never sees your code.
Caliber works on Windows with a few notes:
- `cd` into your project folder, then run `npx @rely-ai/caliber bootstrap` instead of the `curl | bash` command shown for macOS/Linux.
- Then run `agent login` in your terminal to authenticate.

Caliber never overwrites your existing configs without asking. The workflow mirrors code review:
- Backups go to `.caliber/backups/` before every write
- `caliber undo` restores everything to its previous state

If your existing config scores 95+, Caliber skips full regeneration and applies targeted fixes to the specific checks that are failing.
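The backup-then-write cycle can be pictured with a small sketch (the real tool manages `.caliber/backups/` itself; the function names and layout here are illustrative):

```shell
# Illustrative sketch of the backup/restore flow behind `caliber undo`.
backup() {                       # called before every write
  mkdir -p .caliber/backups
  cp "$1" ".caliber/backups/$(basename "$1")"
}
undo() {                         # restore the pre-write version
  cp ".caliber/backups/$(basename "$1")" "$1"
}
```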
Bootstrap gives your agent the /setup-caliber skill. Your agent analyzes your project — languages, frameworks, dependencies, architecture — generates configs, and installs hooks. From there, it's a loop:
```
npx @rely-ai/caliber bootstrap      ← one-time, 2 seconds
              │
              ▼
  agent runs /setup-caliber         ← agent handles everything
              │
              ▼
┌──── configs generated ◄────────────┐
│             │                      │
│             ▼                      │
│      your code evolves             │
│   (new deps, renamed files,        │
│    changed architecture)           │
│             │                      │
│             ▼                      │
└──► caliber refresh ──────────────►─┘
     (auto, on every commit)
```
Pre-commit hooks run the refresh loop automatically. New team members get nudged to bootstrap on their first session.
**Claude Code**

- `CLAUDE.md` — Project context, build/test commands, architecture, conventions
- `CALIBER_LEARNINGS.md` — Patterns learned from your AI coding sessions
- `.claude/skills/*/SKILL.md` — Reusable skills (OpenSkills format)
- `.mcp.json` — Auto-discovered MCP server configurations
- `.claude/settings.json` — Permissions and hooks

**Cursor**

- `.cursor/rules/*.mdc` — Modern rules with frontmatter (description, globs, alwaysApply)
- `.cursor/skills/*/SKILL.md` — Skills for Cursor
- `.cursor/mcp.json` — MCP server configurations

**OpenAI Codex**

- `AGENTS.md` — Project context for Codex
- `.agents/skills/*/SKILL.md` — Skills for Codex

**OpenCode**

- `AGENTS.md` — Project context (shared with Codex when both are targeted)
- `.opencode/skills/*/SKILL.md` — Skills for OpenCode

**GitHub Copilot**

- `.github/copilot-instructions.md` — Project context for Copilot

TypeScript, Python, Go, Rust, Java, Ruby, Terraform, and more. Language and framework detection is fully LLM-driven — no hardcoded mappings. Caliber works on any project.
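As one concrete shape for the formats above, a Cursor rule file under `.cursor/rules/` pairs YAML frontmatter (the description, globs, and alwaysApply keys) with a markdown rule body. The contents here are hypothetical:

```markdown
---
description: Conventions for API route handlers
globs: src/api/**
alwaysApply: false
---

Keep one handler per file and validate request bodies before use.
```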
caliber bootstrap auto-detects which agents you have installed. For manual control:
```shell
caliber init --agent claude           # Claude Code only
caliber init --agent cursor           # Cursor only
caliber init --agent codex            # Codex only
caliber init --agent opencode         # OpenCode only
caliber init --agent github-copilot   # GitHub Copilot only
caliber init --agent all              # All platforms
caliber init --agent claude,cursor    # Comma-separated
```
Not happy with the generated output? During review, refine via natural language — describe what you want changed and Caliber iterates until you're satisfied.
Caliber detects the tools your project uses (databases, APIs, services) and auto-configures matching MCP servers for Claude Code and Cursor.
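For instance, detecting Postgres in your dependencies might yield an entry like this in `.mcp.json` (the server name, package, and connection string are illustrative, not output the source documents):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost:5432/app"]
    }
  }
}
```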
caliber score evaluates your config quality without any LLM calls — purely by cross-referencing config files against your actual project filesystem.
| Category | Points | What it checks |
|---|---|---|
| Files & Setup | 25 | Config files exist, skills present, MCP servers, cross-platform parity |
| Quality | 25 | Code blocks, concise token budget, concrete instructions, structured headings |
| Grounding | 20 | Config references actual project directories and files |
| Accuracy | 15 | Referenced paths exist on disk, config freshness vs. git history |
| Freshness & Safety | 10 | Recently updated, no leaked secrets, permissions configured |
| Bonus | 5 | Auto-refresh hooks, AGENTS.md, OpenSkills format |
Every failing check includes structured fix data — when caliber init runs, the LLM receives exactly what's wrong and how to fix it.
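The shape of that fix data might look something like this (field names and values are hypothetical; the source doesn't document the exact schema):

```json
{
  "check": "referenced-paths-exist",
  "status": "fail",
  "detail": "CLAUDE.md references src/legacy/ which is not on disk",
  "fix": "Update the architecture section to the current directory layout"
}
```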
Caliber watches your AI coding sessions and learns from them. Hooks capture tool usage, failures, and your corrections — then an LLM distills operational patterns into CALIBER_LEARNINGS.md.
```shell
caliber learn install    # Install hooks for Claude Code and Cursor
caliber learn status     # View hook status, event count, and ROI summary
caliber learn finalize   # Manually trigger analysis (auto-runs on session end)
caliber learn remove     # Remove hooks
```
Learned items are categorized by type — [correction], [gotcha], [fix], [pattern], [env], [convention] — and automatically deduplicated.
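Entries in `CALIBER_LEARNINGS.md` might therefore read like the following (examples invented for illustration):

```markdown
- [gotcha] `npm test` hangs unless the dev server is stopped first
- [correction] Migrations live in `db/migrations/`, not `src/db/`
- [convention] All API errors use the `AppError` wrapper
```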
Keep configs in sync with your codebase automatically:
| Hook | Trigger | What it does |
|---|---|---|
| Git pre-commit | Before each commit | Refreshes docs and stages updated files |
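A refresh-and-stage pre-commit hook could be as small as the sketch below (the file list is assumed; the hook Caliber actually installs may differ):

```shell
# Sketch: install a pre-commit hook that refreshes context files and
# stages them before each commit (illustrative, not Caliber's own hook).
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
npx @rely-ai/caliber refresh || exit 1   # regenerate stale context files
git add CLAUDE.md AGENTS.md              # stage whatever refresh touched
EOF
chmod +x .git/hooks/pre-commit
```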