by LukasNiessen
Terraform Skill for Claude Code and Codex. LLMs hallucinate a lot with Terraform - TerraShark fixes this. It eliminates hallucinations, is designed for modular and secure code, and grounds your IaC in the official HashiCorp Terraform best practices.
# Add to your Claude Code skills
git clone https://github.com/LukasNiessen/terrashark

Run this workflow top to bottom.
Record before writing code:

- Runtime (`terraform` or `tofu`) and exact version

If unknown, state assumptions explicitly.
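The runtime-and-version check above can be scripted. A minimal sketch — the fallback assumption string is illustrative, not part of the skill:

```shell
# Record the IaC runtime and exact version before generating any code.
# Falls back to an explicitly stated assumption when neither binary is on PATH.
if command -v terraform >/dev/null 2>&1; then
  runtime="terraform: $(terraform version | head -n 1)"
elif command -v tofu >/dev/null 2>&1; then
  runtime="tofu: $(tofu version | head -n 1)"
else
  runtime="unknown (assuming terraform >= 1.5)"
fi
echo "$runtime"
```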
Select one or more based on user intent and risk:
Primary references:
- references/identity-churn.md
- references/secret-exposure.md
- references/blast-radius.md
- references/ci-drift.md
- references/compliance-gates.md

Supplemental references (only when needed):

- references/testing-matrix.md
- references/quick-ops.md
- references/examples-good.md
- references/examples-bad.md
- references/examples-neutral.md
- references/coding-standards.md
- references/module-architecture.md
- references/ci-delivery-patterns.md
- references/security-and-governance.md
- references/do-dont-patterns.md
- references/mcp-integration.md

For each fix, include:
When applicable, output:

- State migration plan (`moved` blocks, import strategy)

Always provide a command sequence tailored to the runtime and risk tier. Never recommend a direct production apply without a reviewed plan and approval.
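A state migration plan of this kind typically combines `moved` and `import` blocks (Terraform/OpenTofu 1.5+). A hedged sketch — the resource addresses and bucket name below are hypothetical:

```hcl
# Rename a resource without destroy/recreate: map the old state
# address to the new one so the plan shows a move, not a replace.
moved {
  from = aws_s3_bucket.logs
  to   = module.logging.aws_s3_bucket.this
}

# Adopt an existing, unmanaged resource into state (requires >= 1.5).
import {
  to = aws_s3_bucket.audit
  id = "example-audit-bucket" # hypothetical bucket name
}
```

Run `terraform plan` after adding these blocks and confirm the plan shows moves and imports rather than destroys before applying.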
Return:
LLMs hallucinate a lot when it comes to Terraform. This skill fixes it. It includes best practices for Terraform and OpenTofu - good, bad, and neutral examples so the AI avoids common mistakes. Using TerraShark, the AI keeps proven practices in mind, eliminates hallucinations, and defaults to modular, reusable, security-first design.
Most Terraform skills dump huge walls of text onto the agent and burn expensive tokens - with no upside. LLMs don't need the entire Terraform docs again. TerraShark was aggressively de-duplicated and optimized for maximum quality per token.
TerraShark is primarily based on HashiCorp official recommended practices. When guidance conflicts, it prioritizes HashiCorp's recommendations.
Quick Start • Why TerraShark • Token Strategy • What's Included • How It Works • Philosophy
macOS / Linux:
git clone https://github.com/LukasNiessen/terrashark.git ~/.claude/skills/terrashark
Windows (PowerShell):
git clone https://github.com/LukasNiessen/terrashark.git "$env:USERPROFILE\.claude\skills\terrashark"
Windows (Command Prompt):
git clone https://github.com/LukasNiessen/terrashark.git "%USERPROFILE%\.claude\skills\terrashark"
That's it. Claude Code auto-discovers skills in ~/.claude/skills/ — no restart needed.
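To confirm the skill landed where Claude Code looks for it, a quick sanity check (assuming the default clone location above):

```shell
# Check that SKILL.md exists in the auto-discovery path.
if [ -f "$HOME/.claude/skills/terrashark/SKILL.md" ]; then
  status="terrashark skill: installed"
else
  status="terrashark skill: not found"
fi
echo "$status"
```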
Claude Code has a built-in plugin system with marketplace support. Instead of cloning manually, you can add TerraShark's marketplace and install directly from the CLI:
/plugin marketplace add LukasNiessen/terrashark
/plugin install terrashark
Or use the interactive plugin manager — run /plugin, switch to the Discover tab, and install from there. The marketplace reads the .claude-plugin/marketplace.json in this repo to register TerraShark as an installable plugin.
Codex has no global skill system — setup is per-project. Clone TerraShark into your repo and reference it from your AGENTS.md:
# Clone into your project root
git clone https://github.com/LukasNiessen/terrashark.git .terrashark
Then add to your AGENTS.md (or create one in the repo root):
## Terraform
When working with Terraform or OpenTofu, follow the workflow in `.terrashark/SKILL.md`.
Load references from `.terrashark/references/` as needed.
Done. Now ask Claude Code / Codex any Terraform question. TerraShark responses follow the 7-step failure-mode workflow and include an output contract with assumptions, tradeoffs, and rollback notes.
Invoke explicitly:
/terrashark Create a multi-region S3 module with replication
/terrashark Refactor our EKS stack into separate state files per environment, add moved blocks to avoid recreation, set up a GitHub Actions pipeline with plan on PR and gated apply on merge, and wire in Checkov for compliance scanning
Or just ask naturally — TerraShark activates automatically for any Terraform/OpenTofu task:
Review my main.tf for security issues
Migrate this module from count to for_each
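The count-to-for_each migration in the last prompt is the classic case where `moved` blocks prevent recreation. A sketch with hypothetical resource addresses and keys:

```hcl
# After switching aws_instance.app from count to for_each, remap the
# old numeric indices to the new string keys so Terraform renames the
# resources in state instead of destroying and recreating them.
moved {
  from = aws_instance.app[0]
  to   = aws_instance.app["primary"]
}

moved {
  from = aws_instance.app[1]
  to   = aws_instance.app["replica"]
}
```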
https://github.com/user-attachments/assets/2bc4c9ff-9f54-4a49-8bf0-5cfc0f26dec6
| Dimension | TerraShark | terraform-skill | No Skill |
| --- | --- | --- | --- |
| SKILL.md activation cost | ~600 tokens | ~4,400 tokens | 0 |
| Reference granularity | 18 focused files | 6 large files | — |
| Token burn per query | Low (load 1-2 small refs) | High (large refs, e.g. 1,126 lines for modules) | 0 |
| Architecture | Failure-mode workflow | Static reference manual | — |
| Diagnoses before generating | Yes (Step 2) | No | No |
| Output contract | Yes (assumptions, tradeoffs, rollback) | No | No |
| Migration playbooks | Yes (5 playbooks) | Partial (inline snippets) | No |
| Good/bad/neutral examples | Yes (3 dedicated files) | Inline only | No |
| Do/Don't checklist | Yes (dedicated file) | Inline only | No |
| Compliance framework mapping | Yes (ISO 27001, SOC 2, FedRAMP, GDPR, PCI DSS, HIPAA) | Partial (SOC 2, HIPAA, PCI-DSS) | No |
| MCP integration guidance | Yes | No | No |
| Hallucination prevention | Core design goal | Not addressed | No |
| Security-first defaults | Built-in | Checklist-style | No |
| CI/CD templates | GitHub Actions, GitLab CI, Atlantis, Infracost | GitHub Actions, GitLab CI, Atlantis | No |
| License | MIT | Apache 2.0 | — |
The key difference is architectural. terraform-skill is a static reference manual: it dumps ~4,400 tokens into context on every activation, then loads additional reference files that can be over 1,000 lines each. It gives Claude information but never tells it how to think about a problem. There's no diagnosis step, no risk assessment, and no structured output — Claude reads the reference and generates whatever it thinks fits.
TerraShark takes the opposite approach. The core SKILL.md is a 79-line operational workflow that costs ~600 tokens on activation — over 7x leaner. Instead of front-loading a wall of text, it forces Claude through a diagnostic sequence: capture context → identify failure modes → load only the relevant references → propose fixes with explicit risk controls → validate → deliver a structured output contract.
This matters for three reasons:
Token efficiency. terraform-skill burns ~4,400 tokens just to activate, before any reference files. A single reference file like module-patterns.md (1,126 lines, ~7,000 tokens) can double the cost again. TerraShark's activation is ~600 tokens, and its 18 granular reference files mean Claude loads only what's needed — typically one or two small, focused docs instead of one massive dump.
Hallucination prevention. terraform-skill provides good patterns but never asks Claude to diagnose what could go wrong. TerraShark's Step 2 forces failure-mode identification before any code is generated. Step 4 requires explicit risk controls for every fix. Step 7 enforces an output contract with assumptions, tradeoffs, and rollback notes. This is the difference between giving someone a cookbook and giving them a diagnostic checklist.
Reference coverage. TerraShark ships 18 focused reference files covering failure modes, migration playbooks, good/bad/neutral examples, do/don't checklists, compliance framework mappings, and MCP integration. terraform-skill has 6 larger files that go deep on testing and module patterns but lack migration playbooks, explicit anti-pattern banks, compliance mappings beyond a few frameworks, and MCP guidance.
In short: TerraShark is the better skill due to 7x leaner activation, failure-mode-first diagnostic workflow, output contracts, granular references, and LLM-specific hallucination prevention. terraform-skill wins on HCL example depth and testing docs, but TerraShark's architecture is fundamentally better designed for the core use case of LLM-assisted IaC generation.
SKILL.md is kept procedural and compact. See references/token-balance-rationale.md for the full decision and tradeoffs.
SKILL.md execution flow