by NadirRouter
Open-source LLM router & AI cost optimizer. Routes simple prompts to cheap/local models, complex ones to premium — automatically. Drop-in OpenAI-compatible proxy for Claude Code, Codex, Cursor, OpenClaw. Saves 40-70% on AI API costs. Self-hosted, no middleman.
# Add to your Claude Code skills
git clone https://github.com/NadirRouter/NadirClaw

Most LLM requests don't need a premium model. In typical coding sessions, 60-70% of prompts are simple — reading files, short questions, formatting. They can be handled by models that cost 10-20x less.
$ nadirclaw serve
✓ Classifier ready — Listening on localhost:8856
SIMPLE "What is 2+2?" → gemini-flash $0.0002
SIMPLE "Format this JSON" → haiku-4.5 $0.0004
COMPLEX "Refactor auth module..." → claude-sonnet $0.098
COMPLEX "Debug race condition..." → gpt-5.2 $0.450
SIMPLE "Write a docstring" → gemini-flash $0.0002
3 of 5 routed cheaper · $0.549 vs $1.37 all-premium · 60% saved
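The savings figure in the demo above is simple arithmetic over per-request prices. A toy recomputation (prices taken from the session shown; this is an illustration, not NadirClaw's actual accounting code):

```python
# Per-request costs printed by the router in the demo session,
# and the all-premium baseline quoted in its summary line.
routed_costs = [0.0002, 0.0004, 0.098, 0.450, 0.0002]
all_premium_total = 1.37

routed_total = sum(routed_costs)
saved_pct = round((1 - routed_total / all_premium_total) * 100)

print(f"${routed_total:.3f} vs ${all_premium_total} all-premium · {saved_pct}% saved")
```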
Your keys. Your models. No middleman. NadirClaw runs locally and routes directly to providers. No third-party proxy, no subsidized tokens, no platform that can pull the plug on you.
pip install nadirclaw
Or use the install script:
curl -fsSL https://raw.githubusercontent.com/doramirdor/NadirClaw/main/install.sh | sh
Then run the interactive setup wizard:
nadirclaw setup
This guides you through selecting providers, entering API keys, and choosing models for each routing tier. Then start the router:
nadirclaw serve --verbose
That's it. NadirClaw starts on http://localhost:8856 with sensible defaults (Gemini 3 Flash for simple, OpenAI Codex for complex). If you skip nadirclaw setup, the serve command will offer to run it on first launch.
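Once the router is up, any OpenAI-compatible client can point at it. A minimal sketch, assuming your tool honors the common OPENAI_BASE_URL / OPENAI_API_KEY environment variables (variable names vary by client — check your tool's docs):

```shell
# Point an OpenAI-compatible client at the local router.
# The key value here is only a placeholder: NadirClaw forwards
# requests upstream using the provider keys from its own config.
export OPENAI_BASE_URL="http://localhost:8856/v1"
export OPENAI_API_KEY="nadirclaw-local"
```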
- Modes: off (default), safe (lossless), aggressive (future) — see the savings analysis
- Configurable tier thresholds (NADIRCLAW_TIER_THRESHOLDS); set NADIRCLAW_MID_MODEL for a cost-effective middle tier
- Per-request cost/quality strategies: auto, eco, premium, free, reasoning
- Model aliases: sonnet, flash, gpt4 instead of full model IDs
- Subscription auth via nadirclaw auth <provider> login (OpenAI, Anthropic, Google), no API key needed
- nadirclaw report with per-model and per-day cost breakdowns (--by-model, --by-day), anomaly flagging, filters, latency stats, tier breakdown, and token usage
- nadirclaw export --format csv|jsonl --since 7d for offline analysis in spreadsheets or data tools
- --log-raw flag to capture full request/response content for debugging and replay
- /metrics endpoint with request counts, latency histograms, token/cost totals, cache hits, and fallback tracking (zero extra dependencies)
- Optional telemetry extras (pip install nadirclaw[telemetry])
- nadirclaw savings shows exactly how much money you've saved, with monthly projections
- Budget limits via nadirclaw budget, with optional webhook and stdout notifications
- Response caching tuned via NADIRCLAW_CACHE_TTL and NADIRCLAW_CACHE_MAX_SIZE; monitor with nadirclaw cache or the /v1/cache endpoint
- nadirclaw dashboard for the terminal, or visit http://localhost:8856/dashboard for a web UI with real-time stats, cost tracking, and model usage
- doramirdor/nadirclaw-action for CI/CD pipelines

Monitor your routing in real time with nadirclaw dashboard:
Install the dashboard extras: pip install nadirclaw[dashboard]
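The tier-threshold idea above (NADIRCLAW_TIER_THRESHOLDS plus NADIRCLAW_MID_MODEL) can be pictured with a small sketch. Everything below — the complexity score, the function name, the default cut-offs, the comma-separated format — is hypothetical, for illustration only; NadirClaw's real classifier and threshold format may differ:

```python
import os

# Hypothetical sketch: map a prompt-complexity score in [0, 1]
# to a routing tier using two comma-separated cut-offs.
def pick_tier(score: float) -> str:
    raw = os.environ.get("NADIRCLAW_TIER_THRESHOLDS", "0.3,0.7")
    simple_max, complex_min = (float(x) for x in raw.split(","))
    if score < simple_max:
        return "simple"   # routed to NADIRCLAW_SIMPLE_MODEL
    if score >= complex_min:
        return "complex"  # routed to NADIRCLAW_COMPLEX_MODEL
    return "mid"          # routed to NADIRCLAW_MID_MODEL

print(pick_tier(0.1), pick_tier(0.5), pick_tier(0.9))  # simple mid complex
```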
Provider OAuth logins: nadirclaw auth openai login, nadirclaw auth anthropic login, nadirclaw auth antigravity login, nadirclaw auth gemini login

One-line install script:

curl -fsSL https://raw.githubusercontent.com/doramirdor/NadirClaw/main/install.sh | sh
This clones the repo to ~/.nadirclaw, creates a virtual environment, installs dependencies, and adds nadirclaw to your PATH. Run it again to update.
Or install from source manually:

git clone https://github.com/doramirdor/NadirClaw.git
cd NadirClaw
python3 -m venv venv
source venv/bin/activate
pip install -e .
To uninstall:

rm -rf ~/.nadirclaw
sudo rm -f /usr/local/bin/nadirclaw
Run NadirClaw + Ollama with zero cost, fully local:
git clone https://github.com/doramirdor/NadirClaw.git && cd NadirClaw
docker compose up
This starts Ollama and NadirClaw on port 8856. Pull a model once it's running:
docker compose exec ollama ollama pull llama3.1:8b
To use premium models alongside Ollama, create a .env file with your API keys and model config (see .env.example), then restart.
To run NadirClaw standalone (without Ollama):
docker build -t nadirclaw .
docker run -p 8856:8856 --env-file .env nadirclaw
NadirClaw loads configuration from ~/.nadirclaw/.env. Create or edit this file to set API keys and model preferences:
# ~/.nadirclaw/.env
# API keys (set the ones you use)
GEMINI_API_KEY=AIza...
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
# Model routing
NADIRCLAW_SIMPLE_MODEL=gemini-3-flash-preview
NADIRCLAW_COMPLEX_MODEL=gemini-2.5-pro
# Server
NADIRCLAW_PORT=8856
If ~/.nadirclaw/.env does not exist, NadirClaw falls back to .env in the current directory.
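That lookup order can be sketched like this (a simplified illustration; the helper name is made up and the real loader may behave differently):

```python
from pathlib import Path
from typing import Optional
import tempfile

# Simplified sketch of the .env lookup order described above:
# prefer ~/.nadirclaw/.env, fall back to ./.env in the current directory.
def pick_env_file(home: Path, cwd: Path) -> Optional[Path]:
    for candidate in (home / ".nadirclaw" / ".env", cwd / ".env"):
        if candidate.is_file():
            return candidate
    return None

# Demo with throwaway directories standing in for $HOME and $PWD.
with tempfile.TemporaryDirectory() as d:
    home, cwd = Path(d, "home"), Path(d, "cwd")
    cwd.mkdir(parents=True)
    (cwd / ".env").write_text("NADIRCLAW_PORT=8856\n")
    assert pick_env_file(home, cwd) == cwd / ".env"  # no home config: fallback
    (home / ".nadirclaw").mkdir(parents=True)
    (home / ".nadirclaw" / ".env").write_text("NADIRCLAW_PORT=8856\n")
    assert pick_env_file(home, cwd) == home / ".nadirclaw" / ".env"
```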
NadirClaw supports multiple ways to provide LLM credentials, checked in this order: