by alexgreensh
Find the ghost tokens. Fix them. Survive compaction. Avoid context quality decay.
# Add to your Claude Code skills
git clone https://github.com/alexgreensh/token-optimizer
Recommended on every platform (macOS, Linux, Windows):
/plugin marketplace add alexgreensh/token-optimizer
/plugin install token-optimizer@alexgreensh-token-optimizer
Then in Claude Code: /token-optimizer
Please enable auto-update after installing. Claude Code ships third-party marketplaces with auto-update off by default, and plugin authors cannot change that default, so you won't get bug fixes automatically unless you turn it on. In Claude Code:
/plugin → Marketplaces tab → select alexgreensh-token-optimizer → Enable auto-update. It's a one-time, ten-second step, and you'll never miss a fix again. Token Optimizer also prints a one-time reminder on your first SessionStart so you don't forget.
The plugin install above is the only path you should use on Windows. Do not also run the install.sh script described below; it is a bash installer for macOS/Linux/WSL, and combining the two triggers an EBUSY: resource busy or locked error, because Git Bash holds Windows file handles open while the plugin system is trying to clone.
Repo size note: our repo is ~3 MB (218 files, ~2,700 git objects). If your /plugin marketplace add attempt seems to be downloading gigabytes, it's not us; cancel and check whether Claude Code is cloning a different URL, or whether something is wrong with your network. You can verify by cloning manually: git clone --bare https://github.com/alexgreensh/token-optimizer.git should finish in under a second and produce a ~2.6 MB directory.
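If you want to sanity-check the size claim yourself, a minimal sketch (assuming git and du are on your PATH) is:

```shell
# Sanity check: a bare clone of the marketplace repo should be tiny.
tmp=$(mktemp -d)
if git clone --bare --quiet https://github.com/alexgreensh/token-optimizer.git "$tmp/to.git" 2>/dev/null; then
  du -sh "$tmp/to.git"   # expect a few MB, not gigabytes
else
  echo "clone failed -- check your network or proxy settings"
fi
rm -rf "$tmp"
```

If the bare clone is small but the plugin install is still huge, the problem is on the client side, not in this repo.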
If you've already hit the EBUSY error:
1. Kill any lingering git.exe processes.
2. Delete C:\Users\<you>\.claude\token-optimizer.
3. Delete C:\Users\<you>\.claude\plugins\marketplaces\alexgreensh-token-optimizer.
4. Re-run the /plugin commands above.
Manual ZIP fallback (if plugin install repeatedly fails): download the repo ZIP (~800 KB), extract it to C:\Users\<you>\.claude\token-optimizer\, then run python measure.py setup-quality-bar from that directory. Note: on Windows the command is python, not python3.
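From a Git Bash prompt, the recovery steps above can be sketched like this (paths assume the default Claude Code locations; the double slashes in the taskkill flags are Git Bash's escaping, and the command is harmless if no git.exe is running):

```shell
# Hedged sketch: EBUSY cleanup from Git Bash on Windows.
taskkill //F //IM git.exe 2>/dev/null || true   # release held file handles
rm -rf "$USERPROFILE/.claude/token-optimizer"
rm -rf "$USERPROFILE/.claude/plugins/marketplaces/alexgreensh-token-optimizer"
echo "cleaned -- now re-run the /plugin commands above"
```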
If you prefer a script-managed install on macOS or Linux, this works too and auto-updates daily via git pull --ff-only. Do not run this on Windows, and do not run it alongside the plugin install above on any platform. Pick one method.
git clone https://github.com/alexgreensh/token-optimizer.git ~/.claude/token-optimizer
bash ~/.claude/token-optimizer/install.sh
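The daily refresh is just a fast-forward pull, so if you ever want to update on demand rather than wait for the daily run, the equivalent manual step (assuming the clone location from the commands above) is:

```shell
# Manual refresh: the same operation the daily auto-update performs.
repo="$HOME/.claude/token-optimizer"
if [ -d "$repo/.git" ]; then
  git -C "$repo" pull --ff-only
else
  echo "token-optimizer is not installed at $repo"
fi
```

--ff-only means the pull refuses to create merge commits, so a locally modified clone fails loudly instead of silently diverging.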
Works on Claude Code and OpenClaw. Each platform has its own native plugin (Python for Claude Code, TypeScript for OpenClaw). No bridging, no shared runtime, zero cross-platform dependencies.
Token Optimizer works on OpenAI Codex (CLI and Desktop). Same core engine, adapted for AGENTS.md, GPT-5.x models, and Codex's hook surface. This is a beta: core audit, coaching, dashboard, and fleet scanning all work. Some advanced features (Delta Mode, Structure Map, invisible Bash compression) are waiting on upstream Codex hook parity.
codex plugin marketplace add alexgreensh/token-optimizer
Then in the Codex TUI, run /plugins and install Token Optimizer, or ask for it conversationally: "Run Token Optimizer".
After install, set up hooks and the bookmarkable dashboard:
TOKEN_OPTIMIZER_RUNTIME=codex python3 skills/token-optimizer/scripts/measure.py codex-install --project "$PWD"
TOKEN_OPTIMIZER_RUNTIME=codex python3 skills/token-optimizer/scripts/measure.py setup-daemon
Dashboard: http://localhost:24843/token-optimizer (separate port from Claude Code's 24842, both can run side by side).
Auto-updates on startup via git ls-remote. Manual: codex plugin marketplace upgrade.
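A ls-remote-based startup check boils down to comparing the local HEAD against the remote HEAD and fetching only when they differ. A minimal sketch (the checkout path here is illustrative, not the plugin's actual location):

```shell
# Sketch: detect whether the remote has moved past the local checkout.
repo="$HOME/.claude/token-optimizer"
url="https://github.com/alexgreensh/token-optimizer.git"
local_rev=$(git -C "$repo" rev-parse HEAD 2>/dev/null || echo none)
remote_rev=$(git ls-remote "$url" HEAD 2>/dev/null | cut -f1)
if [ -n "$remote_rev" ] && [ "$local_rev" != "$remote_rev" ]; then
  echo "update available"
else
  echo "up to date (or remote unreachable)"
fi
```

The advantage of ls-remote over a blind pull is that the common case (already up to date) costs one small network round trip and touches nothing on disk.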
See docs/codex-beta.md for the full feature parity table, hook profiles, and Codex model pricing.
Native TypeScript plugin for OpenClaw agent systems. Zero Python dependency, zero runtime dependencies, zero telemetry. Works with any model your gateway is configured against: Claude, GPT-5, Gemini, DeepSeek, local via Ollama.
# From GitHub (recommended)
openclaw plugins install github:alexgreensh/token-optimizer
# From ClawHub
openclaw plugins install token-optimizer
Inside OpenClaw, run /token-optimizer for a guided audit with coaching.
See openclaw/README.md for full docs.
Most tools tell you your context is full. Token Optimizer shows you exactly where every token went, how much each turn cost, which skills and MCP servers actually fired, and which ones are just sitting there eating your budget.

One single-file HTML dashboard. Auto-regenerates after every session via the SessionEnd hook. Bookmark http://localhost:24842/token-optimizer and it's always current. Zero tokens from your context, zero network calls, zero setup after install.
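Before bookmarking, you can confirm the dashboard daemon is actually serving with a quick probe (assuming curl is available; harmless either way):

```shell
# Probe the dashboard port; safe to run even if the daemon isn't up yet.
if curl -fsS --max-time 2 http://localhost:24842/token-optimizer >/dev/null 2>&1; then
  status="reachable"
else
  status="not running (it regenerates after the next SessionEnd)"
fi
echo "dashboard: $status"
```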