
spec-skill

by revagomes


Spec-driven development workflow skill for oh-my-claudecode

0 stars · 0 forks · Shell · Added 3/22/2026

Tags: agentic-development, ai-skill, claude-code, llm-workflow, oh-my-claudecode, spec-driven-development
Installation
# Add to your Claude Code skills
git clone https://github.com/revagomes/spec-skill
SKILL.md

---
name: spec
description: >
  Spec-driven development workflow. Runs a structured interview, writes a spec
  file to a configurable specs directory (default: .claude/specs/), generates a
  task list, implements incrementally, and verifies outcomes. Use when the user
  says "write a spec", "spec-driven", "let's plan first", "spec first then
  implement", "create a plan before coding", or invokes /spec directly.
version: 1.2.0
---

Spec-Driven Development

You are running the spec skill. Your job is to enforce a spec-first workflow: interview → spec → tasks → implement → verify. No source files are created or modified until the user has approved a written spec.


Configuration

The skill reads an optional config file at .claude/spec-config.yml in the project root. Create it to override defaults:

# .claude/spec-config.yml
specs_dir: docs/specs        # default: .claude/specs
test_command: npm test       # default: auto-detected
lint_command: npm run lint   # default: auto-detected

If the file does not exist, all values fall back to their defaults. The skill will write test_command and lint_command into this file automatically the first time it detects or is told the commands (see Phase 5), so future runs skip detection.
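The fallback behaviour can be sketched as a small shell helper (hypothetical — `config_get` and its naive YAML parsing are illustrative, not part of the skill):

```shell
# Hypothetical helper: resolve a config key, falling back to a default
# when .claude/spec-config.yml is absent or does not define the key.
config_get() {
  local key="$1" default="$2" file=".claude/spec-config.yml"
  if [ -f "$file" ]; then
    local value
    # Take the text after "key:", then strip any trailing "# comment".
    value=$(sed -n "s/^${key}:[[:space:]]*//p" "$file" | sed 's/[[:space:]]*#.*//')
    [ -n "$value" ] && { printf '%s\n' "$value"; return; }
  fi
  printf '%s\n' "$default"
}

specs_dir=$(config_get specs_dir ".claude/specs")
```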


Quickstart Check

Before Phase 1, assess whether the request is trivial:

  • Trivial = can be described in one sentence AND touches fewer than 3 files AND requires no architectural decisions (e.g., "rename this variable", "fix this typo", "add a null check").
  • Non-trivial = everything else.

If trivial, say: "This looks like a small change. Want to use a lightweight checklist instead of a full spec? Reply quick for a minimal checklist, or full to run the complete spec workflow."

  • quick mode: Skip directly to a minimal spec with only Overview, Tasks, and Definition of Done sections. No interview. Present it and ask for approval.
  • full mode (default): Continue with Phase 1 as described below.

Phase 1 — Requirements Interview

Run git log --oneline -10 and ls the project root. Note the top-level structure and the last few commits. Do not recursively explore the project tree. Mention what you observe so the user can confirm or correct your understanding of the context.

Then ask only the questions the user has not already answered. Number them clearly:

  1. What — What is the feature or change? (1–3 sentences)
  2. Why — What problem does it solve, or what goal does it serve?
  3. Scope — Which files, modules, or systems are involved?
  4. Constraints — Any technical, architectural, or timeline constraints?
  5. Definition of Done — How will success be verified? (tests, manual check, etc.)

If the user's initial message already answers some questions, fill those in automatically and ask only the remainder.

Wait for the user's answers before proceeding to Phase 2.

Complexity signal: If the answers suggest more than 20 discrete tasks, say so and suggest splitting the work into multiple, sequential specs before continuing.


Phase 2 — Spec Document Generation

Naming and conflict handling

Derive a short, descriptive name from the feature title. Use kebab-case lowercase (e.g., user-auth-flow.md).
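The naming rule can be sketched as a shell one-liner (a hypothetical helper — the skill only needs to produce the same kebab-case result):

```shell
# Hypothetical helper: turn a feature title into a kebab-case file name.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed 's/[^a-z0-9][^a-z0-9]*/-/g; s/^-//; s/-$//'  # collapse non-alphanumerics to "-"
}
```

For example, `slugify "User Auth Flow"` yields `user-auth-flow`, giving the file `user-auth-flow.md`.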

Read .claude/spec-config.yml if it exists and use the specs_dir value. Fall back to .claude/specs/ if the file is absent or the key is not set. Write the spec to:

<specs_dir>/<feature-name>.md

Before writing, check whether a spec file already exists at that path:

  • File exists, Status is Pending implementation: Read it. Summarise what it contains and ask: "A spec for this feature already exists with incomplete tasks. Reply resume to continue it, revise to update it with new requirements, or new to create a separate spec under a different name."
  • File exists, Status is Implemented: Say so. Ask: "This feature was already implemented. Are you starting a follow-up iteration? Reply yes to create <feature-name>-v2.md, or provide a new name."
  • No file exists: Proceed to generate the spec.
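The three branches above hinge on the spec's `## Status:` line. A minimal sketch in shell (hypothetical; `check_spec` and its return strings are illustrative):

```shell
# Hypothetical conflict check: inspect an existing spec's status line
# ("## Status: Pending implementation" / "Implemented" / "Partial — ...").
check_spec() {
  local spec="$1"
  if [ -f "$spec" ]; then
    local status
    status=$(sed -n 's/^## Status:[[:space:]]*//p' "$spec")
    case "$status" in
      "Implemented") echo "follow-up-v2" ;;      # offer <feature-name>-v2.md
      *)             echo "resume-or-revise" ;;  # Pending / Partial specs
    esac
  else
    echo "generate"                              # no conflict: write the spec
  fi
}
```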

Spec template

The spec must follow this exact template:

# Spec: <Feature Title>

## Status: Pending implementation

---

## Overview

<1–3 sentence description of what this feature does and why it exists.>

---

## Motivation

<Why is this needed? What problem does it solve?>

---

## Scope

### In Scope
- <item>

### Out of Scope
- <item>

---

## Technical Design

### Files to Create

| File | Purpose |
|------|---------|
| `path/to/file` | Description |

### Files to Modify

| File | Changes |
|------|---------|
| `path/to/file` | Description |

### Interfaces / Contracts

<Key interfaces, method signatures, data shapes, or API contracts. Write "N/A" if none.>

### Dependencies

<Other modules, services, or external systems this feature depends on.>

---

## Definition of Done

- [ ] <Verifiable condition 1>
- [ ] <Verifiable condition 2>
- [ ] Tests pass (or: no automated tests exist — manually verified)
- [ ] Linting passes (or: no linter configured)

---

## Risks and Open Questions

| Risk | Mitigation |
|------|-----------|
| <risk> | <mitigation> |

---

## Tasks

<!-- Tasks are ordered by dependency. Later tasks may depend on earlier ones. -->
<!-- Statuses: [ ] pending  [x] done  [!] blocked -->

- [ ] 1. <Task title>
- [ ] 2. <Task title>

Approval loop

After writing the file, display its full contents to the user and ask:

"Does this spec look correct? Reply yes to proceed, edit with your changes to revise it, or stop to exit."

Handling responses:

  • yes (or any clear affirmative like "looks good", "sure", "go ahead"): proceed to Phase 3.
  • edit <instructions>: apply the requested changes to the spec file, re-display the full updated spec, and repeat this approval prompt. Do not proceed until the user says yes.
  • edit (with no instructions): ask "What would you like to change?" and wait.
  • stop: exit the skill. Leave the spec file in place as a record.
  • ambiguous response: ask "Just to confirm — shall I proceed with implementation, or would you like to make changes first?"

Do not write any source code until the user has explicitly approved the spec.


Phase 3 — Task List

After spec approval, review the ## Tasks section in the spec and expand it if needed:

  • Each task must represent one logical concern. Prefer single-file changes; when a change inherently requires 2–3 coordinated edits (e.g., an interface and its implementation, a route and its handler), group them into one task and list all affected files in the task title.
  • Order by dependency — earlier tasks unblock later ones.
  • Use statuses: [ ] pending, [x] done, [!] blocked.
  • Update the spec file's ## Tasks section in place.

Show the final task list to the user and ask: "Ready to start? Reply yes to begin implementation."


Phase 4 — Incremental Implementation

Work through tasks one at a time:

  1. Mark the task [x] in the spec file as soon as it is complete.
  2. Read existing files before editing — never modify blindly.
  3. When a group of tasks together produce a testable or runnable state (typically every 3–5 tasks), offer to commit with a suggested message. Wait for the user to confirm before committing — do not commit automatically.
  4. If you discover a blocker, mark the task [!], explain the blocker clearly, and ask the user how to proceed. Do not attempt to work around blockers silently.
  5. Never skip tasks or reorder them without telling the user why.

Phase 5 — Verification

After all tasks are marked [x]:

Detecting test and lint commands

Step 1 — Check saved config first. Read .claude/spec-config.yml. If test_command and/or lint_command are present, use them directly and skip detection for those commands.

Step 2 — Auto-detect any missing commands. For each command not found in config, check the project root in order:

| Indicator | Test command | Lint command |
|-----------|--------------|--------------|
| package.json with scripts.test | npm test | npm run lint (if script exists) |
| Makefile with a test target | make test | make lint (if target exists) |
| Cargo.toml | cargo test | cargo clippy |
| pyproject.toml / pytest.ini / setup.cfg | pytest | ruff check . or flake8 |
| composer.json | composer test | composer lint or vendor/bin/phpcs |
| go.mod | go test ./... | golangci-lint run |
| .github/workflows/*.yml | Parse CI file for test step | Parse CI file for lint step |

If a command still cannot be detected, say: "I couldn't detect a [test/lint] command for this project. Please provide the command to run, or reply skip to skip that check."

Step 3 — Persist newly discovered commands. After detecting or receiving a command from the user, write it to .claude/spec-config.yml under test_command / lint_command. Create the file if it does not exist. This ensures future runs skip detection entirely.
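The detection order for the test command can be sketched in shell (a simplified, hypothetical version — lint detection works the same way, and the CI-workflow parsing fallback is omitted for brevity):

```shell
# Hypothetical sketch of the detection table, checked in order.
detect_test_command() {
  if [ -f package.json ] && grep -q '"test"' package.json; then
    echo "npm test"
  elif [ -f Makefile ] && grep -q '^test:' Makefile; then
    echo "make test"
  elif [ -f Cargo.toml ]; then
    echo "cargo test"
  elif [ -f pyproject.toml ] || [ -f pytest.ini ] || [ -f setup.cfg ]; then
    echo "pytest"
  elif [ -f composer.json ]; then
    echo "composer test"
  elif [ -f go.mod ]; then
    echo "go test ./..."
  fi  # else: fall through to CI-file parsing, then ask the user
}
```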

Verification steps

  1. Run the detected (or user-provided) test command.
  2. Run the detected (or user-provided) lint command.
  3. Check each item in ## Definition of Done and tick it if met.
  4. Update the spec ## Status line:
    • All items met → Implemented
    • Some items unmet → Partial — <reason>
  5. Report the outcome to the user with a summary of what passed, what failed, and any items that remain open.

/spec list

When invoked as /spec list, do not start a new workflow. Instead:

  1. Read .claude/spec-config.yml to resolve specs_dir (default: .claude/specs/).
  2. Scan <specs_dir>/ for all *.md files.
  3. For each file, read the ## Status line.
  4. Display a summary table:
Spec                        Status
--------------------------  --------------------------
user-auth-flow              Implemented
payment-integration         Pending implementation
dark-mode                   Partial — tests failing
  5. If the directory does not exist or contains no spec files, say so and suggest running /spec <feature-name> to create the first one.
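The listing logic can be sketched in shell (hypothetical; `list_specs` is illustrative and skips the config lookup for brevity):

```shell
# Hypothetical sketch of /spec list: print each spec's name and status.
list_specs() {
  local dir="${1:-.claude/specs}" f name status
  for f in "$dir"/*.md; do
    # Unmatched glob means no spec files (or no directory) at all.
    [ -e "$f" ] || { echo "No specs found in $dir"; return; }
    name=$(basename "$f" .md)
    status=$(sed -n 's/^## Status:[[:space:]]*//p' "$f")
    printf '%-27s %s\n' "$name" "$status"
  done
}
```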

Guardrails

These rules are absolute and must not be bypassed:

  • No source code before spec approval. The only file you may create before the user says yes is the spec file itself in the configured specs directory.
  • Spec files are never deleted. They are a permanent decision log. Only update ## Status or ## Tasks.
  • Each phase ends with a user checkpoint. Never chain phases without pausing.
  • Verification is not optional. Do not declare success without attempting tests/lint (or confirming with the user that they do not apply).
  • Resumability. If a session is interrupted, read the spec file to find the last completed task and resume from there. Never restart from scratch.

Resuming an Interrupted Session

If the user invokes /spec and a spec file for the same feature already exists:

  1. Read the spec file.
  2. Find the first task still marked [ ].
    • If a task is marked [!] (blocked), surface it first and ask how the user wants to resolve the blocker before continuing.
  3. Summarise what has been done and what remains.
  4. Ask: "Shall I continue from task N — <title>?"

If /spec is invoked with no argument and multiple incomplete specs exist, list them all and ask the user to choose which one to resume (or start a new one).
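Under the task-status conventions above ([ ] pending, [!] blocked), locating the resume point can be sketched as (hypothetical helper):

```shell
# Hypothetical sketch: surface the first blocked task if any,
# otherwise the first still-pending task.
next_task() {
  local spec="$1"
  grep -m1 '^- \[!\]' "$spec" 2>/dev/null && return
  grep -m1 '^- \[ \]' "$spec"
}
```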


Example Invocations

/spec user-auth-flow       # starts the workflow for a named feature
/spec                      # interactive — asks for feature name first
/spec list                 # show all specs and their statuses

The skill also activates automatically when the user says any of:

  • "write a spec for …"
  • "let's plan first"
  • "spec-driven approach"
  • "spec first, then implement"
  • "create a plan before coding"
README.md

spec — Spec-Driven Development Skill for Claude Code

An oh-my-claudecode skill that enforces a spec-first workflow for AI-assisted development:

interview → spec → tasks → implement → verify

No source files are created or modified until the user has approved a written spec. The spec file persists as a decision log and enables interrupted sessions to be resumed exactly where they left off.


Why spec-first?

AI agents that jump straight to coding often produce solutions that solve the wrong problem, miss edge cases, or require expensive rewrites. A short structured spec aligns understanding before any code is written, giving you a checkpoint to catch misunderstandings early and a record of the decisions made.


Installation

Copy SKILL.md into your oh-my-claudecode skills directory:

# Global installation (available in all projects)
cp SKILL.md ~/.claude/skills/spec.md

# Project-local installation
cp SKILL.md .claude/skills/spec.md

Restart Claude Code. The skill is now available as /spec.


Usage

/spec user-auth-flow       # start a spec for a named feature
/spec                      # interactive — asks for feature name first
/spec list                 # show all specs and their statuses

The skill also activates automatically when you say:

  • "write a spec for …"
  • "let's plan first"
  • "spec-driven approach"
  • "spec first, then implement"
  • "create a plan before coding"

Workflow

Phase 1 — Interview

Claude asks up to five questions (What, Why, Scope, Constraints, Definition of Done). Questions already answered in your message are filled in automatically.

Phase 2 — Spec Generation

A structured spec file is written to .claude/specs/<feature-name>.md and shown to you for approval. You can reply yes, edit <changes>, or stop.

Phase 3 — Task List

The ## Tasks section of the spec is expanded into an ordered, atomic checklist.

Phase 4 — Implementation

Tasks are executed one at a time. Each completed task is marked [x] in the spec. Commit checkpoints are offered at logical milestones.

Phase 5 — Verification

Test and lint commands are read from .claude/spec-config.yml if already saved, or auto-detected (npm, make, cargo, pytest, composer, go, CI workflow files). Discovered commands are written back to config so future runs skip detection. Results are reported and the spec status is updated to Implemented or Partial.


Configuration

Create .claude/spec-config.yml in your project root to customise behaviour:

specs_dir: docs/specs        # default: .claude/specs
test_command: npm test       # default: auto-detected
lint_command: npm run lint   # default: auto-detected

All keys are optional. The skill writes test_command and lint_command automatically after the first successful detection.


Spec file location

Specs are written to the configured specs_dir...

