by revagomes
Spec-driven development workflow skill for oh-my-claudecode
```shell
# Add to your Claude Code skills
git clone https://github.com/revagomes/spec-skill
```

You are running the spec skill. Your job is to enforce a spec-first workflow: interview → spec → tasks → implement → verify. No source files are created or modified until the user has approved a written spec.
The skill reads an optional config file at .claude/spec-config.yml in the project
root. Create it to override defaults:
```yaml
# .claude/spec-config.yml
specs_dir: docs/specs        # default: .claude/specs
test_command: npm test       # default: auto-detected
lint_command: npm run lint   # default: auto-detected
```
If the file does not exist, all values fall back to their defaults. The skill will
write test_command and lint_command into this file automatically the first time it
detects or is told the commands (see Phase 5), so future runs skip detection.
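The fallback behaviour is simple enough to sketch. The following is an illustrative Python sketch, not part of the skill itself; since the config is flat `key: value` pairs, no YAML library is needed:

```python
from pathlib import Path

DEFAULTS = {
    "specs_dir": ".claude/specs",
    "test_command": None,  # None = auto-detect later
    "lint_command": None,
}

def load_spec_config(root="."):
    """Read .claude/spec-config.yml, falling back to DEFAULTS for missing keys."""
    config = dict(DEFAULTS)
    path = Path(root) / ".claude" / "spec-config.yml"
    if not path.exists():
        return config
    for raw in path.read_text().splitlines():
        line = raw.split("#", 1)[0].strip()  # drop inline comments
        key, sep, value = line.partition(":")
        if sep and key.strip() in config and value.strip():
            config[key.strip()] = value.strip()
    return config
```

Unknown keys in the file are ignored, and an absent file simply yields the defaults.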
Before Phase 1, assess whether the request is trivial.
If trivial, say: "This looks like a small change. Want to use a lightweight checklist instead of a full spec? Reply quick for a lightweight mode, or full to run the complete spec workflow."
Run `git log --oneline -10` and `ls` the project root. Note the top-level structure
and the last few commits. Do not recursively explore the project tree. Mention what
you observe so the user can confirm or correct your understanding of the context.
Then ask only the questions the user has not already answered, up to five in total: What, Why, Scope, Constraints, and Definition of Done. Number them clearly.
If the user's initial message already answers some questions, fill those in automatically and ask only the remainder.
Wait for the user's answers before proceeding to Phase 2.
Complexity signal: If the answers suggest more than 20 discrete tasks, say so and suggest splitting the work into multiple, sequential specs before continuing.
Derive a short, descriptive name from the feature title. Use kebab-case lowercase
(e.g., user-auth-flow.md).
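For illustration, the naming rule could be implemented as follows (a hypothetical helper, not shipped with the skill):

```python
import re

def feature_slug(title):
    """Derive a kebab-case spec filename from a feature title."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{slug}.md"
```

For example, `feature_slug("User Auth Flow")` yields `user-auth-flow.md`.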
Read .claude/spec-config.yml if it exists and use the specs_dir value. Fall back
to .claude/specs/ if the file is absent or the key is not set. Write the spec to:
<specs_dir>/<feature-name>.md
Before writing, check whether a spec file already exists at that path:
- If its status is Pending implementation: read it, summarise what it
contains, and ask: "A spec for this feature already exists with incomplete tasks.
Reply resume to continue it, revise to update it with new requirements, or
new to create a separate spec under a different name."
- If its status is Implemented: say so and ask: "This feature was already
implemented. Are you starting a follow-up iteration? Reply yes to create
<feature-name>-v2.md, or provide a new name."

The spec must follow this exact template:
```markdown
# Spec: <Feature Title>
## Status: Pending implementation
---
## Overview
<1–3 sentence description of what this feature does and why it exists.>
---
## Motivation
<Why is this needed? What problem does it solve?>
---
## Scope
### In Scope
- <item>
### Out of Scope
- <item>
---
## Technical Design
### Files to Create
| File | Purpose |
|------|---------|
| `path/to/file` | Description |
### Files to Modify
| File | Changes |
|------|---------|
| `path/to/file` | Description |
### Interfaces / Contracts
<Key interfaces, method signatures, data shapes, or API contracts. Write "N/A" if none.>
### Dependencies
<Other modules, services, or external systems this feature depends on.>
---
## Definition of Done
- [ ] <Verifiable condition 1>
- [ ] <Verifiable condition 2>
- [ ] Tests pass (or: no automated tests exist — manually verified)
- [ ] Linting passes (or: no linter configured)
---
## Risks and Open Questions
| Risk | Mitigation |
|------|-----------|
| <risk> | <mitigation> |
---
## Tasks
<!-- Tasks are ordered by dependency. Later tasks may depend on earlier ones. -->
<!-- Statuses: [ ] pending [x] done [!] blocked -->
- [ ] 1. <Task title>
- [ ] 2. <Task title>
```
After writing the file, display its full contents to the user and ask:
"Does this spec look correct? Reply yes to proceed, edit with your changes to revise it, or stop to exit."
Handling responses:
- yes: proceed to the task breakdown.
- edit <changes>: apply the requested revisions, rewrite the file, and show it again for approval.
- stop: exit the workflow without writing any code.

Do not write any source code until the user has explicitly approved the spec.
After spec approval, review the ## Tasks section in the spec and expand it if needed: break the work into small, atomic, dependency-ordered tasks, using the status markers [ ] pending, [x] done, [!] blocked. Keep the ## Tasks section in place as the record of progress. Show the final task list to the user and ask: "Ready to start? Reply yes to begin implementation."
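For illustration, flipping a task's status marker amounts to a one-line in-place edit of the spec file. A hypothetical Python sketch (not part of the skill):

```python
import re

def set_task_status(spec_text, task_number, status):
    """Rewrite one task's checkbox in the Tasks list.
    status is " " (pending), "x" (done) or "!" (blocked)."""
    return re.sub(
        rf"^- \[[ x!]\] {task_number}\.",   # match the task's current marker
        f"- [{status}] {task_number}.",
        spec_text,
        count=1,
        flags=re.MULTILINE,
    )
```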
Work through tasks one at a time:
Mark each task [x] in the spec file as soon as it is complete. If a task cannot proceed, mark it [!], explain the blocker clearly, and ask the user how to proceed. Do not attempt to work around blockers silently.

After all tasks are marked [x]:
Step 1 — Check saved config first.
Read .claude/spec-config.yml. If test_command and/or lint_command are present,
use them directly and skip detection for those commands.
Step 2 — Auto-detect any missing commands. For each command not found in config, check the project root in order:
| Indicator | Test command | Lint command |
|-----------|-------------|-------------|
| package.json with scripts.test | npm test | npm run lint (if script exists) |
| Makefile with a test target | make test | make lint (if target exists) |
| Cargo.toml | cargo test | cargo clippy |
| pyproject.toml / pytest.ini / setup.cfg | pytest | ruff check . or flake8 |
| composer.json | composer test | composer lint or vendor/bin/phpcs |
| go.mod | go test ./... | golangci-lint run |
| .github/workflows/*.yml | Parse CI file for test step | Parse CI file for lint step |
If a command still cannot be detected, say: "I couldn't detect a [test/lint] command for this project. Please provide the command to run, or reply skip to skip that check."
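The detection order can be sketched as a first-match-wins walk over the indicator table. The following illustrative Python sketch (not part of the skill) covers a few of the rows:

```python
import json
from pathlib import Path

def detect_commands(root="."):
    """First-match-wins detection; None means the command is still unknown."""
    root = Path(root)
    test_cmd = lint_cmd = None
    pkg = root / "package.json"
    if pkg.exists():
        scripts = json.loads(pkg.read_text()).get("scripts", {})
        if "test" in scripts:
            test_cmd = "npm test"
        if "lint" in scripts:
            lint_cmd = "npm run lint"
    if test_cmd is None and (root / "Cargo.toml").exists():
        test_cmd, lint_cmd = "cargo test", "cargo clippy"
    if test_cmd is None and (root / "go.mod").exists():
        test_cmd, lint_cmd = "go test ./...", "golangci-lint run"
    return test_cmd, lint_cmd
```

A None result for either command triggers the fallback question above.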
Step 3 — Persist newly discovered commands.
After detecting or receiving a command from the user, write it to
.claude/spec-config.yml under test_command / lint_command. Create the file if it
does not exist. This ensures future runs skip detection entirely.
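An illustrative sketch of the persistence step, assuming the flat `key: value` config format (hypothetical helper, not part of the skill):

```python
from pathlib import Path

def save_commands(test_command=None, lint_command=None, root="."):
    """Append newly discovered commands to .claude/spec-config.yml,
    creating the file if needed and never touching existing keys."""
    path = Path(root) / ".claude" / "spec-config.yml"
    path.parent.mkdir(parents=True, exist_ok=True)
    existing = path.read_text() if path.exists() else ""
    new_lines = []
    if test_command and "test_command:" not in existing:
        new_lines.append(f"test_command: {test_command}")
    if lint_command and "lint_command:" not in existing:
        new_lines.append(f"lint_command: {lint_command}")
    if new_lines:
        if existing and not existing.endswith("\n"):
            existing += "\n"
        path.write_text(existing + "\n".join(new_lines) + "\n")
```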
Step 4 — Run the commands. Run the resolved test and lint commands, re-check each item under ## Definition of Done and tick it if met, then update the ## Status line:
Set the status to Implemented if everything passes, or to Partial — <reason> if it does not.

When invoked as /spec list, do not start a new workflow. Instead:
1. Read .claude/spec-config.yml to resolve specs_dir (default: .claude/specs/).
2. Scan <specs_dir>/ for all *.md files.
3. Read each file's ## Status line.
4. Display a summary table:

| Spec | Status |
|------|--------|
| user-auth-flow | Implemented |
| payment-integration | Pending implementation |
| dark-mode | Partial — tests failing |
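For illustration, the listing boils down to a glob plus a scan for each file's ## Status line (hypothetical Python sketch, not part of the skill):

```python
from pathlib import Path

def list_specs(specs_dir=".claude/specs"):
    """Return (name, status) pairs for every spec file in specs_dir."""
    rows = []
    for spec in sorted(Path(specs_dir).glob("*.md")):
        status = "Unknown"
        for line in spec.read_text().splitlines():
            if line.startswith("## Status:"):
                status = line.removeprefix("## Status:").strip()
                break
        rows.append((spec.stem, status))
    return rows
```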
If no specs exist yet, say so and suggest /spec <feature-name> to create the first one.

These rules are absolute and must not be bypassed:
- Never create or modify source files before the user has approved the spec.
- Never delete a spec's ## Status or ## Tasks sections; all workflow state is recorded there.

If the user invokes /spec and a spec file for the same feature already exists:
- Resume from the first task still marked [ ].
- If any task is marked [!] (blocked), surface it first and ask how the user wants
to resolve the blocker before continuing.

If /spec is invoked with no argument and multiple incomplete specs exist, list them
all and ask the user to choose which one to resume (or start a new one).
```shell
/spec user-auth-flow   # starts the workflow for a named feature
/spec                  # interactive — asks for feature name first
/spec list             # show all specs and their statuses
```
The skill also activates automatically when the user's request matches one of its natural-language trigger phrases.
An oh-my-claudecode skill that enforces a spec-first workflow for AI-assisted development:
interview → spec → tasks → implement → verify
No source files are created or modified until the user has approved a written spec. The spec file persists as a decision log and enables interrupted sessions to be resumed exactly where they left off.
AI agents that jump straight to coding often produce solutions that solve the wrong problem, miss edge cases, or require expensive rewrites. A short structured spec aligns understanding before any code is written, giving you a checkpoint to catch misunderstandings early and a record of the decisions made.
Copy SKILL.md into your oh-my-claudecode skills directory:
```shell
# Global installation (available in all projects)
cp SKILL.md ~/.claude/skills/spec.md

# Project-local installation
cp SKILL.md .claude/skills/spec.md
```
Restart Claude Code. The skill is now available as /spec.
```shell
/spec user-auth-flow   # start a spec for a named feature
/spec                  # interactive — asks for feature name first
/spec list             # show all specs and their statuses
```
The skill also activates automatically when your request matches its trigger phrases.
1. Interview. Claude asks up to five questions (What, Why, Scope, Constraints, Definition of Done). Questions already answered in your message are filled in automatically.
2. Spec. A structured spec file is written to .claude/specs/<feature-name>.md and shown to
you for approval. You can reply yes, edit <changes>, or stop.
3. Tasks. The ## Tasks section of the spec is expanded into an ordered, atomic checklist.
4. Implement. Tasks are executed one at a time. Each completed task is marked [x] in the spec.
Commit checkpoints are offered at logical milestones.
5. Verify. Test and lint commands are read from .claude/spec-config.yml if already saved, or
auto-detected (npm, make, cargo, pytest, composer, go, CI workflow files). Discovered
commands are written back to config so future runs skip detection. Results are reported
and the spec status is updated to Implemented or Partial.
Create .claude/spec-config.yml in your project root to customise behaviour:
```yaml
specs_dir: docs/specs        # default: .claude/specs
test_command: npm test       # default: auto-detected
lint_command: npm run lint   # default: auto-detected
```
All keys are optional. The skill writes test_command and lint_command automatically
after the first successful detection.
Specs are written to the configured specs_dir...