by ZhangHanDong
```bash
# Add to your Claude Code skills
git clone https://github.com/ZhangHanDong/claude-code-api-rs
```

**NEW:** LLM Proxy — use the Claude Code subscription as a direct LLM | WebSocket reconnection | Python SDK v0.1.55 full parity
cc-sdk is a community-driven Rust SDK for Claude Code CLI:
- `llm::query("prompt")` → text (bypasses the agent layer, uses the Claude Code subscription)
- `max_budget_usd` and cache token tracking

```rust
use cc_sdk::llm;

#[tokio::main]
async fn main() -> cc_sdk::Result<()> {
    // Simple: use the Claude Code subscription as an LLM proxy
    let response = llm::query("Explain quantum entanglement", None).await?;
    println!("{}", response.text);
    Ok(())
}
```
```rust
use cc_sdk::{query, ClaudeCodeOptions};
use futures::StreamExt;

#[tokio::main]
async fn main() -> cc_sdk::Result<()> {
    // Full agent mode with streaming
    let options = ClaudeCodeOptions::builder()
        .model("sonnet")
        .max_budget_usd(10.0)
        .build();

    let mut stream = query("Hello, Claude!", Some(options)).await?;
    while let Some(msg) = stream.next().await {
        println!("{:?}", msg?);
    }
    Ok(())
}
```
A high-performance Rust implementation of an OpenAI-compatible API gateway for Claude Code CLI. Built on top of the robust cc-sdk, this project provides a RESTful API interface that allows you to interact with Claude Code using the familiar OpenAI API format.
Option 1: Install from crates.io

```bash
cargo install claude-code-api
```

Then run:

```bash
RUST_LOG=info claude-code-api
# or use the short alias
RUST_LOG=info ccapi
```

Option 2: Build from source

```bash
git clone https://github.com/ZhangHanDong/claude-code-api-rs.git
cd claude-code-api-rs
```

Build the entire workspace (API server + SDK):

```bash
cargo build --release
```

Start the server:

```bash
./target/release/claude-code-api
```
Note: The API server automatically includes and uses claude-code-sdk-rs for all Claude Code CLI interactions.
The API server will start on http://localhost:8080 by default.
```bash
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-opus-4-5-20251101",
    "messages": [
      {"role": "user", "content": "Hello, Claude!"}
    ]
  }'
```
| Model | ID | Alias |
|-------|-----|-------|
| Opus 4.6 | claude-opus-4-6 | "opus" |
| Sonnet 4.6 | claude-sonnet-4-6 | "sonnet" |
| Haiku 4.5 | claude-haiku-4-5-20251001 | "haiku" |
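The alias mapping in the table above can be captured client-side as a small lookup. This is a sketch: `resolve_model` is a hypothetical helper, since the gateway resolves aliases for you server-side.

```python
# Alias table mirrored from the README; aliases track the latest version.
MODEL_ALIASES = {
    "opus": "claude-opus-4-6",
    "sonnet": "claude-sonnet-4-6",
    "haiku": "claude-haiku-4-5-20251001",
}

def resolve_model(name: str) -> str:
    """Map an alias to its full model ID; full IDs pass through unchanged."""
    return MODEL_ALIASES.get(name, name)
```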
```bash
# Use aliases — they always point to the latest version
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "sonnet", "messages": [{"role": "user", "content": "Hello"}]}'
```
```python
import openai

# Configure the client to use Claude Code API
client = openai.OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed"  # API key is not required
)

response = client.chat.completions.create(
    model="opus",  # or "sonnet" for faster responses
    messages=[
        {"role": "user", "content": "Write a hello world in Python"}
    ]
)

print(response.choices[0].message.content)
```
Maintain context across multiple requests:
```python
# First request - creates a new conversation
response = client.chat.completions.create(
    model="sonnet-4",
    messages=[
        {"role": "user", "content": "My name is Alice"}
    ]
)
conversation_id = response.conversation_id

# Subsequent request - continues the conversation
response = client.chat.completions.create(
    model="sonnet-4",
    conversation_id=conversation_id,
    messages=[
        {"role": "user", "content": "What's my name?"}
    ]
)
# Claude will remember: "Your name is Alice"
```
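The continuation pattern above can be factored into a small request builder that attaches `conversation_id` only when one exists. This is a hypothetical sketch; note that `conversation_id` as a request parameter is this gateway's extension, not part of the standard OpenAI API.

```python
def continuation_request(model, user_text, conversation_id=None):
    """Build kwargs for client.chat.completions.create(), adding the
    gateway's conversation_id extension only when continuing a chat."""
    kwargs = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    if conversation_id is not None:
        kwargs["conversation_id"] = conversation_id
    return kwargs
```

The first call omits the parameter entirely, so the gateway opens a fresh conversation; later calls pass the ID returned by the previous response.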
Process images with text:
```python
response = client.chat.completions.create(
    model="claude-opus-4-20250514",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "/path/to/image.png"}}
        ]
    }]
)
```
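The multimodal payload above can be assembled with a small helper. This is a hypothetical convenience function; it just builds the OpenAI content-parts shape shown in the example.

```python
def image_message(text, image_url):
    """Build a user message combining text and an image reference
    in the OpenAI content-parts format."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }
```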
Supported image formats:
```python
stream = client.chat.completions.create(
    model="claude-opus-4-20250514",
    messages=[{"role": "user", "content": "Write a long story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
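The streaming loop can be wrapped in a helper that joins the deltas into one string. This is a sketch that works on any iterable of chunk objects shaped like the OpenAI stream; `collect_stream` is a hypothetical name.

```python
def collect_stream(chunks):
    """Join the delta.content pieces of a chat-completion stream,
    skipping chunks without content (e.g., role or finish chunks)."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if getattr(delta, "content", None):
            parts.append(delta.content)
    return "".join(parts)
```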
Enable Claude to access external tools and services:
```bash
# Create MCP configuration
cat > mcp_config.json << EOF
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}
EOF

# Start with MCP support
export CLAUDE_CODE__MCP__ENABLED=true
export CLAUDE_CODE__MCP__CONFIG_FILE="./mcp_config.json"
./target/release/claude-code-api
```
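The same configuration can also be generated programmatically, which is convenient when paths or tokens come from elsewhere. This sketch just reproduces the JSON shown above.

```python
import json

# Reproduce the mcp_config.json shown above programmatically.
mcp_config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem",
                     "/allowed/path"],
        },
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"],
            "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"},
        },
    }
}

with open("mcp_config.json", "w") as f:
    json.dump(mcp_config, f, indent=2)
```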
Use tools for AI integrations:
```python
response = client.chat.completions.create(
    model="claude-3-5-haiku-20241022",
    messages=[
        {"role": "user", "content": "Please preview this URL: https://rust-lang.org"}
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "url_preview",
                "description": "Preview a URL and extract its content",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "url": {"type": "string", "description": "The URL to preview"}
                    },
                    "required": ["url"]
                }
            }
        }
    ],
    tool_choice="auto"
)

# Response will include tool_calls:
# {
#   "choices": [{
#     "message": {
#       "role": "assistant",
#       "tool_calls": [{
#         "id": "call_xxx",
#         "type": "function",
#         "function": {
#           "name": "url_preview",
#           "arguments": "{\"url\": \"https://rust-lang.org\"}"
#         }
#       }]
#     }
#   }]
# }
```
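On the client side, the `tool_calls` in such a response can be unpacked as shown below. This is a sketch with a hypothetical helper name; the one real subtlety is that `arguments` arrives as a JSON string, per the OpenAI format, and must be parsed.

```python
import json

def extract_tool_calls(message):
    """Return (name, parsed_arguments) pairs from an assistant
    message dict that may carry tool_calls."""
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls
```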
This feature enables seamless integration with modern AI tools like url-preview and other OpenAI-compatible tool chains. url-preview v0.6.0+ uses this exact format to extract structured data from web pages using Claude.
- `allowed_tools` / `disallowed_tools` and `permission_mode` in `ClaudeCodeOptions` to whitelist/blacklist tools and choose approval behavior.
- `CanUseTool` to decide `{allow, input?/reason?}` per tool call.
- `config/*.toml` or `mcp_config.json`, plus scripts under `script/` (e.g., `start_with_mcp.sh`); the API reuses the SDK's MCP wiring.
- `agents` and `setting_sources` to pass