Wherever your AI supports skills, VibeSkills works. 340+ governed skills spanning coding, research, data science, automation & creative work.
# Add to your Claude Code skills
```bash
git clone https://github.com/foryourhealth111-pixel/Vibe-Skills
```

vibe is a host-syntax-neutral skill contract.
/vibe, $vibe, and agent-invoked vibe all mean the same thing: enter the same governed runtime, not different entrypoints.
## What vibe Does

vibe is the official governed runtime for tasks that need:
The runtime exposes only one user-facing path.
The user does not choose between M, L, or XL as entry branches.
Those grades still exist, but only as internal execution strategy.
Use vibe when the task is not a trivial one-line edit and you want the system to:
Do not use vibe for:
vibe always runs the same 6-stage state machine:
1. skeleton_check
2. deep_interview
3. requirement_doc
4. xl_plan
5. plan_execute
6. phase_cleanup

These stages are mandatory. They may become lighter for simple work, but they are never skipped as a matter of policy.
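As a rough sketch, the mandatory six-stage pipeline could look like the following. The stage names come from the contract above; the handler mechanics are illustrative only, not the real runtime:

```python
# The six governed stages, in mandatory order (names from the contract).
STAGES = [
    "skeleton_check",
    "deep_interview",
    "requirement_doc",
    "xl_plan",
    "plan_execute",
    "phase_cleanup",
]

def run_pipeline(task, handlers):
    """Run every stage in order. Stages may no-op for simple work,
    but none of them is ever skipped."""
    receipts = []
    state = {"task": task}
    for stage in STAGES:
        handler = handlers.get(stage, lambda s: s)  # light pass for simple work
        state = handler(state) or state
        receipts.append(stage)  # every stage leaves a trace
    return state, receipts

# Usage: even a trivial run where every handler is a no-op
# still passes through all six stages.
_, receipts = run_pipeline({"goal": "rename a variable"}, {})
```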
🧠 Planning · 🛠️ Engineering · 🤖 AI · 🔬 Research · 🎨 Creation
Install → /vibe or $vibe → Smart Routing → M / L / XL Execution → Governance Verification → ✅ Delivery
## 🎯 Core Vision
VibeSkills evolves with the times — ensuring it stays genuinely useful while dramatically lowering the barrier to cutting-edge vibecoding technology, eliminating the cognitive anxiety and steep learning curve that comes with new AI tools.
Whether or not you have a programming background, you can directly harness the most advanced AI capabilities with minimal effort. Productivity gains from AI should be available to everyone.
Traditional skill repos answer: "What tools do I have?" VibeSkills tackles the core pain point of heavy AI users:
`interactive_governed`: the default and effective mode.
Use this when the system should still ask the user high-value questions, confirm frozen requirements, and pause at plan approval boundaries.
`benchmark_autonomous`: legacy compatibility alias only.
If older callers still pass benchmark_autonomous, the runtime silently normalizes it to interactive_governed.
It is not a separate execution plane and it must not create a second unattended control path.
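A minimal sketch of that normalization, assuming a simple alias table (the function name `normalize_mode` is hypothetical):

```python
# Legacy aliases are silently folded onto the single governed mode,
# so no second unattended control path can exist.
MODE_ALIASES = {
    "benchmark_autonomous": "interactive_governed",  # legacy alias only
}

def normalize_mode(requested: str) -> str:
    """Collapse any legacy alias onto the governed mode before
    intent capture runs."""
    return MODE_ALIASES.get(requested, requested)
```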
M, L, and XL remain active, but only as internal orchestration grades.
- M: narrow execution, single-agent or tightly scoped work
- L: design or coordination work that needs staged planning and review
- XL: parallelizable or long-running work that benefits from agent teams and wave control

The governed runtime selects the internal grade after deep_interview and before plan_execute.
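An illustrative version of that internal selection might look like this. The signal names (`parallelizable`, `needs_design`, and so on) are assumptions for the sketch, not the runtime's real inputs:

```python
# Hypothetical grade selector: runs after deep_interview and before
# plan_execute. Users never choose the grade directly.
def select_grade(task: dict) -> str:
    if task.get("parallelizable") or task.get("long_running"):
        return "XL"  # agent teams and wave control
    if task.get("needs_design") or task.get("needs_review"):
        return "L"   # staged planning and review
    return "M"       # narrow, single-agent execution
```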
User-facing behavior stays the same regardless of host syntax:
Compatibility notes for downstream verification and host adapters:
- M = single-agent
- L grade always follows: design → plan → user approval → subagent execution → two-stage review.
- Agent lifecycle primitives: spawn_agent / send_input / wait / close_agent

**skeleton_check**: Check repo shape, active branch, existing plan or requirement artifacts, and runtime prerequisites before starting.
**deep_interview**: Produce a structured intent contract containing:
In interactive_governed, this stage may ask direct questions.
Legacy benchmark_autonomous input is normalized before this stage runs, so intent capture stays on the same governed mode.
**requirement_doc**: Freeze a single requirement document under docs/requirements/.
After this point, execution should trace back to the requirement document rather than to raw chat history.
**xl_plan**: Write the execution plan under docs/plans/.
The plan must contain:
**plan_execute**: Execute the approved plan.
If the work is parallelizable, prefer Codex-native XL orchestration.
If subagents are spawned, their prompts must end with $vibe.
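The `$vibe`-suffix rule can be enforced with a small guard. This helper is hypothetical, not part of the runtime:

```python
# Ensure every spawned subagent re-enters the governed runtime by
# terminating its prompt with $vibe.
def finalize_subagent_prompt(prompt: str) -> str:
    prompt = prompt.rstrip()
    if not prompt.endswith("$vibe"):
        prompt += "\n$vibe"
    return prompt
```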
**phase_cleanup**: Cleanup is part of the runtime, not an afterthought.
Each phase must leave behind:
The canonical router remains authoritative for route selection.
vibe does not create a second router.
It consumes the canonical route, confirm, unattended, and overlay surfaces and then executes the governed runtime contract around them.
Rules:
- confirm_required still uses the existing white-box user_confirm interface.
- Other workflow layers may shape discipline, but they must not become a parallel runtime.
Required ownership split:
- vibe: governed runtime authority

Forbidden outcomes:
Read these protocols on demand:
- protocols/runtime.md: governed runtime contract and stage ownership
- protocols/think.md: planning, research, and pre-execution analysis
- protocols/do.md: coding, debugging, and verification
- protocols/review.md: review and quality gates
- protocols/team.md: XL multi-agent orchestration
- protocols/retro.md: retrospective and learning capture

For LEARN / retrospective work, use the Context Retro Advisor vocabulary from protocols/retro.md.
- CER format artifacts when that protocol is invoked

Memory remains runtime-neutral:
- state_store (runtime-neutral): default session memory

Never claim success without evidence.
Minimum invariants:
The governed runtime should leave behind:
- outputs/runtime/vibe-sessions/&lt;run-id&gt;/skeleton-receipt.json
- outputs/runtime/vibe-sessions/&lt;run-id&gt;/intent-contract.json
- docs/requirements/YYYY-MM-DD-&lt;topic&gt;.md
- docs/plans/YYYY-MM-DD-&lt;topic&gt;-execution-plan.md
- outputs/runtime/vibe-sessions/&lt;run-id&gt;/phase-*.json
- outputs/runtime/vibe-sessions/&lt;run-id&gt;/cleanup-receipt.json
- scripts/router/resolve-pack-route.ps1
- core/skill-contracts/v1/vibe.json

Works with Claude Code · Codex · Windsurf · OpenCode · Cursor and any AI environment that supports the Skills protocol. Native MCP compatibility.
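As an illustration, the per-run artifact layout could be generated like this. The paths mirror the contract; the `session_artifacts` helper itself is hypothetical:

```python
# Illustrative layout builder for one governed run.
def session_artifacts(run_id, date, topic):
    base = f"outputs/runtime/vibe-sessions/{run_id}"
    return [
        f"{base}/skeleton-receipt.json",         # skeleton_check receipt
        f"{base}/intent-contract.json",          # deep_interview output
        f"docs/requirements/{date}-{topic}.md",  # frozen requirement doc
        f"docs/plans/{date}-{topic}-execution-plan.md",
        f"{base}/cleanup-receipt.json",          # phase_cleanup receipt
        # plus per-phase receipts matching {base}/phase-*.json
    ]
```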
| ❌ Traditional Pain Points (you've probably felt these) | ✅ VibeSkills Solutions (what we've built) |
|:---|:---|
| Skills never activate: Hundreds of capabilities in the repo, but AI rarely remembers to use them — activation rate is extremely low. | 🧠 Intelligent Routing: The system automatically routes to the right skill based on context — no need to memorize a skill list. |
| Blind execution: AI dives in without clarifying requirements — fast but off-target, projects gradually become black boxes. | 🧭 Governed Workflow: Clarify → Verify → Trace is enforced in a unified process; every step is auditable. |
| Conflicting tools: Lack of coordination between plugins and workflows leads to environment pollution or infinite loops. | 🧩 Global Governance: 129 contract rules define safety boundaries and fallback mechanisms for long-term stability. |
| Messy workspace: After extended use, repos become cluttered; new Agents miss project details when taking over, causing handoff gaps. | 📁 Semantic Directory Governance: Fixed-architecture file storage so any new AI conversation instantly understands the project context. |
| AI bad habits: Deletes main files while clearing backups; writes silent fallbacks then confidently claims "it's done". | 🛡️ Built-in Safety Rules: Batch file deletion is prohibited (one file at a time only); fallback mechanisms must always show explicit warnings. |
| Manual workflow discipline: Users must maintain their own AI collaboration process from experience — high learning cost. | 🚦 Framework-guided end-to-end: Requirements → Plan → Multi-agent execution → Automated test iteration — fully managed. |
| Skill dispatch chaos in multi-agent runs: Hard to assign the right skills to each agent for different tasks. | 🤖 Automatic Skill Dispatch: Multi-agent workflows automatically assign the corresponding Skills to each Agent's task. |
Which of those pain points hit home? Find your position — what comes next will land harder.
| Audience | Description |
|:---:|:---|
| 🎯 Users who need reliable delivery | Want AI to be a dependable partner, not a runaway horse |
| ⚡ Power users heavily relying on AI/Agents | Need a unified foundation to support large-scale workflows |
| 🏢 Small teams with high standardization needs | Want AI workflows to be more maintainable and transferable |
| 😩 Practitioners exhausted by skill sprawl | Already tired of tool hunting — just want a ready-to-use solution |
If you're looking for a single small script, this may be overkill. But if you want to use AI more reliably, smoothly, and sustainably — this is your indispensable foundation.
You know this is for you. Next question: 340+ skills in one system — how do they stay out of each other's way?
With 340+ skills, you might wonder: "Won't similar skills conflict? How does the system know which one to use?"
VibeSkills uses a Canonical Router as the single authoritative routing decision center:
```mermaid
graph LR
    A[User Task] --> B{Canonical Router}
    B --> C[Intent Recognition]
    C --> D[Keyword Extraction]
    D --> E[Skill Matching]
    E --> F[Conflict Detection]
    F --> G[Priority Ranking]
    G --> H[Routing Decision]
    H --> I[Execute Skill]
    style B fill:#7B61FF,stroke:#fff,stroke-width:2px,color:#fff
    style F fill:#FF9800,stroke:#fff,stroke-width:2px,color:#fff
```
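The routing stages in the diagram can be approximated by a toy pass like the one below. The keyword matching and ranking here are stand-ins, not the real Canonical Router:

```python
# Toy approximation of the routing pipeline: keyword extraction,
# skill matching, conflict detection, and priority ranking.
def route(task, skills):
    keywords = set(task.lower().split())       # keyword extraction
    matches = {name: len(keywords & triggers)  # skill matching
               for name, triggers in skills.items()
               if keywords & triggers}
    if not matches:                            # nothing matched
        return None
    # conflict detection + priority ranking: highest overlap wins
    return max(matches, key=matches.get)

# Usage with two hypothetical trigger sets.
skills = {"tdd-guide": {"tdd", "test"}, "code-review": {"review", "quality"}}
```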
VibeSkills follows a Clarify ➔ Plan ➔ Execute ➔ Verify governed workflow to ensure every task goes through complete quality control:
- speckit-clarify defines clear boundaries and acceptance criteria
- aios-architect designs the implementation path
- tdd-guide and code-review ensure delivery quality

> [!TIP]
> Built-in CRON support: explicitly enable it in your request to let vibe continuously advance tasks on a schedule.
```text
I want you to continuously push forward XXX task based on cron, completing: XXXX $vibe
```
Traditional skill repos let AI "freely choose" — the result:
VibeSkills routing guarantees:
After selecting the primary skill, the router also automatically determines the execution level based on task complexity:
| Level | Use Case | Characteristics |
|:---:|:---|:---|
| M | Narrow-scope work with clear boundaries | Single-agent, token-efficient, fast response |
| L | Medium complexity requiring design, planning, and review | Multi-phase, restrained, controllable |
| XL | Large tasks — parallelizable, long-running, multi-agent wave execution | Auto-dispatches corresponding Skills, high parallelism |
The system automatically selects the level after requirements clarification, before plan execution. Users only need to invoke `/vibe` or `$vibe`. You can also express an explicit preference:
Please execute this task according to the plan, launchin