by erans
LunaRoute is a high-performance local proxy for AI coding assistants like Claude Code, OpenAI Codex CLI, and OpenCode. Get complete visibility into every LLM interaction with zero-overhead passthrough, comprehensive session recording, and powerful debugging capabilities.
```shell
# Add to your Claude Code skills
git clone https://github.com/erans/lunaroute
```
---
Your AI Coding Assistant's Best Friend
A blazing-fast local proxy for AI coding assistants that gives you complete visibility into every LLM interaction. Zero configuration, sub-millisecond overhead, and powerful debugging capabilities.
```shell
eval $(lunaroute-server env)
```
That's it! This single command:

- Starts the LunaRoute server in the background
- Sets `ANTHROPIC_BASE_URL` (for Claude Code)
- Sets `OPENAI_BASE_URL` (for Codex CLI)

Start coding with your AI assistant immediately - both APIs are ready to use!
If you prefer to run the server manually:
```shell
# Terminal 1: Start the server
lunaroute-server
```

```shell
# Terminal 2: Point your AI tools to it
export ANTHROPIC_BASE_URL=http://localhost:8081   # For Claude Code
export OPENAI_BASE_URL=http://localhost:8081/v1   # For Codex CLI
```
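A quick sanity check (nothing LunaRoute-specific, just echoing the variables back) confirms your shell is pointing at the proxy:

```shell
# Point both tools at the local proxy, then echo the values back
export ANTHROPIC_BASE_URL=http://localhost:8081
export OPENAI_BASE_URL=http://localhost:8081/v1

echo "Anthropic: $ANTHROPIC_BASE_URL"
echo "OpenAI:    $OPENAI_BASE_URL"
```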
That's it. No API keys to configure, no YAML files to write, nothing. LunaRoute automatically:

- Routes `gpt-*` models → OpenAI
- Routes `claude-*` models → Anthropic

Stop flying blind. LunaRoute records every interaction with zero configuration:
```shell
# Literally just one command
eval $(lunaroute-server env)
```
What you get instantly:

- OpenAI `/v1/chat/completions` + Anthropic `/v1/messages` endpoints
- Web UI at http://localhost:8082

How it works:
```shell
$ lunaroute-server env
export ANTHROPIC_BASE_URL=http://127.0.0.1:8081
export OPENAI_BASE_URL=http://127.0.0.1:8081/v1
# LunaRoute server started on http://127.0.0.1:8081
# Web UI available at http://127.0.0.1:8082

$ eval $(lunaroute-server env)  # Sets env vars and starts server
$ # Now use your AI tools normally - they're automatically configured!
```
Download the latest release for your platform from GitHub Releases:
```shell
# Linux/macOS: Extract and run
tar -xzf lunaroute-server-*.tar.gz
chmod +x lunaroute-server

# Optional: Add to PATH for global access
sudo mv lunaroute-server /usr/local/bin/

# Start using it immediately!
eval $(lunaroute-server env)
```
```shell
git clone https://github.com/erans/lunaroute.git
cd lunaroute
cargo build --release --package lunaroute-server

# Binary location: target/release/lunaroute-server
# Start using it!
eval $(./target/release/lunaroute-server env)
```
The fastest way to get started - automatically starts server and configures your shell:
```shell
# One command does everything!
eval $(lunaroute-server env)

# Now use your AI tools - they're automatically configured
# Both Claude Code and Codex CLI work immediately
```
What happens:

- `ANTHROPIC_BASE_URL` set to http://127.0.0.1:8081
- `OPENAI_BASE_URL` set to http://127.0.0.1:8081/v1
- Web UI available at http://127.0.0.1:8082

Custom port:
```shell
eval $(lunaroute-server env --port 8090)
```
Stop the server:
```shell
pkill -f "lunaroute-server serve"
```
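After sending the kill signal you can confirm the background process is actually gone with `pgrep` (same pattern as the `pkill` above):

```shell
# Stop the background server started by `eval $(lunaroute-server env)`
pkill -f "lunaroute-server serve" 2>/dev/null || true

# Confirm it is gone (prints a message once nothing matches)
pgrep -f "lunaroute-server serve" >/dev/null || echo "lunaroute-server is not running"
```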
If you prefer to manage the server yourself:
```shell
# Terminal 1: Start LunaRoute
lunaroute-server

# Terminal 2: Configure your shell
export ANTHROPIC_BASE_URL=http://localhost:8081
export OPENAI_BASE_URL=http://localhost:8081/v1

# Now use Claude Code or Codex CLI
# View sessions at http://localhost:8082
```
What you get out of the box:
```text
✓ OpenAI provider enabled (no API key - will use client auth)
✓ Anthropic provider enabled (no API key - will use client auth)
✓ API dialect: Both (OpenAI + Anthropic)
✓ Dual passthrough mode: OpenAI→OpenAI + Anthropic→Anthropic (no normalization)
    - gpt-* models → OpenAI provider (passthrough)
    - claude-* models → Anthropic provider (passthrough)
✓ Bypass enabled for unknown API paths
✓ Session recording enabled (SQLite + JSONL)
✓ Web UI available at http://localhost:8082
```
Need more control? Use a config file:
```yaml
# Save as config.yaml
host: "127.0.0.1"
port: 8081
api_dialect: "both"  # Already the default!

providers:
  openai:
    enabled: true
    base_url: "https://api.openai.com/v1"  # Or use ChatGPT backend
    # api_key: "sk-..."  # Optional: defaults to OPENAI_API_KEY env var
  anthropic:
    enabled: true
    # api_key: "sk-ant-..."  # Optional: defaults to ANTHROPIC_API_KEY env var

session_recording:
  enabled: true
  sqlite:
    enabled: true
    path: "~/.lunaroute/sessions.db"
  jsonl:
    enabled: true
    directory: "~/.lunaroute/sessions"
  retention:
    max_age_days: 30
    max_size_mb: 1024

ui:
  enabled: true
  host: "127.0.0.1"
  port: 8082
```
```shell
lunaroute-server --config config.yaml
```
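The commented-out `api_key` fields in the config fall back to environment variables, so you can keep keys out of the file entirely (the values below are placeholders, exactly as shown in the config comments):

```shell
# Provide provider keys via environment instead of config.yaml
export OPENAI_API_KEY="sk-..."         # placeholder - use your real key
export ANTHROPIC_API_KEY="sk-ant-..."  # placeholder - use your real key
```

Then start the server with `lunaroute-server --config config.yaml` as above.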
LunaRoute accepts both OpenAI and Anthropic formats simultaneously with zero normalization:

- `/v1/chat/completions` → routes to OpenAI
- `/v1/messages` → routes to Anthropic

Track everything that matters with dual storage:
SQLite Database - Fast queries and analytics:
```sql
SELECT model_used, COUNT(*), SUM(input_tokens), SUM(output_tokens)
FROM sessions
WHERE started_at > datetime('now', '-7 days')
GROUP BY model_used;
```
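The same query can be run from the shell with the `sqlite3` CLI, pointed at the default database path from the sample config (adjust if yours differs):

```shell
# Weekly token usage per model, against the default session database
DB="$HOME/.lunaroute/sessions.db"
if [ -f "$DB" ]; then
  sqlite3 "$DB" \
    "SELECT model_used, COUNT(*), SUM(input_tokens), SUM(output_tokens)
     FROM sessions
     WHERE started_at > datetime('now', '-7 days')
     GROUP BY model_used;"
else
  echo "No session database yet at $DB"
fi
```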
JSONL Logs - Human-readable, full request/response data:
```shell
# Watch live sessions
tail -f ~/.lunaroute/sessions/$(date +%Y-%m-%d)/session_*.jsonl | jq

# Search for specific content
grep -r "TypeError" ~/.lunaroute/sessions/
```
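Because the JSONL files are grouped into per-day directories (the same layout the `tail -f` pattern above uses), counting today's recorded sessions is one short check:

```shell
# Count today's recorded sessions (directory layout from the default config)
DIR="$HOME/.lunaroute/sessions/$(date +%Y-%m-%d)"
if [ -d "$DIR" ]; then
  ls "$DIR"/session_*.jsonl 2>/dev/null | wc -l
else
  echo "No sessions recorded today"
fi
```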
Get detailed breakdowns on shutdown or via API:
```text
Session Statistics Summary
───────────────────────────────────────────────────────────────
Session: 550e8400-e29b-41d4-a716-446655440000
Requests: 5
Input tokens: 2,450
Output tokens: 5,830
Thinking tokens: 1,200
Total tokens: 9,480

Tool usage:
  Read: 12 calls (avg 45ms)
  Write: 8 calls (avg 120ms)
  Bash: 3 calls (avg 850ms)

Performance:
  Avg response time: 2.3s
  Proxy overhead: 12ms total (0.5%)
  Provider latency: 2.288s (99.5%)

Estimated cost: $0.14 USD
```
Protect sensitive data automatically before it hits disk:
```yaml
session_recording:
  pii:
    enabled: true
    detect_email: true
    detect_phone: true
    detect_ssn: true
    detect_credit_card: true
    redaction_mode: "tokenize"  # mask, remove, tokenize, or partial
```
Before: `My email is john.doe@example.com and SSN is 123-45-6789`
After: `My email is [EMAIL:a3f8e9d2] and SSN is [SSN:7b2c4f1a]`
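A minimal sketch of what tokenize-style redaction does conceptually (this is illustrative only; LunaRoute's actual token derivation is internal): deriving a short, stable token from the sensitive value means the same email always maps to the same placeholder, so repeated occurrences stay correlated without exposing the raw data.

```shell
# Illustrative sketch - not LunaRoute's real algorithm.
# A stable hash-derived token keeps repeated values correlatable.
value="john.doe@example.com"
token=$(printf '%s' "$value" | sha256sum | cut -c1-8)
echo "[EMAIL:$token]"
```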
24 metric types exposed at `/metrics` - perfect for Grafana dashboards.
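To check the endpoint by hand, scrape it with `curl` (assuming metrics are served on the main proxy port; adjust the address if your setup differs):

```shell
# Fetch the metrics exposition from a running proxy
if curl -sf http://127.0.0.1:8081/metrics -o /tmp/lunaroute-metrics.txt; then
  head -5 /tmp/lunaroute-metrics.txt
else
  echo "No LunaRoute server reachable on 127.0.0.1:8081"
fi
```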
Built-in web interface for browsing sessions:
```shell
# Automatically available at http://localhost:8082
lunaroute-server
```
Features:
The `env` command starts the server in the background and outputs shell commands to configure your environment:
```shell
# Basic usage - starts server on default port 8081
eval $(lunaroute-server env)

# Custom port
eval $(lunaroute-server env --port 8090)

# Custom host and port
```