OpenClaude is an open-source coding-agent CLI for cloud and local model providers.
Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex OAuth, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.
OpenClaude is also mirrored to GitLawb: gitlawb.com/node/repos/z6MkqDnb/openclaude
Quick Start | Setup Guides | Providers | Source Build | VS Code Extension | Sponsors | Community
npm install -g @gitlawb/openclaude
If the install later reports ripgrep not found, install ripgrep system-wide and confirm rg --version works in the same terminal before starting OpenClaude.
openclaude
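If the ripgrep note applies, a quick preflight check can confirm `rg` is visible from the same shell (a sketch; the install hints in the message are assumptions about your package manager):

```shell
# Preflight: confirm ripgrep is on PATH in this shell before starting OpenClaude.
if command -v rg >/dev/null 2>&1; then
  rg --version | head -n 1
else
  echo "ripgrep not found: install it first (e.g. 'brew install ripgrep' or 'apt install ripgrep')"
fi
```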
Inside OpenClaude:
- /provider for guided provider setup and saved profiles
- /onboard-github for GitHub Models onboarding

macOS / Linux:
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
openclaude
Windows PowerShell:
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude
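The exports above can be wrapped in a small preflight script. This is a sketch: `gpt-4o` is just the example model from this section, and the fail-fast check on the key is an addition, not OpenClaude behavior:

```shell
# Sketch: apply the env vars from above, with a fail-fast check on the key.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL="${OPENAI_MODEL:-gpt-4o}"   # keep any model already chosen
if [ -z "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is not set; export it before running openclaude" >&2
else
  echo "ready: CLAUDE_CODE_USE_OPENAI=$CLAUDE_CODE_USE_OPENAI model=$OPENAI_MODEL"
fi
```

Source the script (`. ./openai-env.sh`) rather than executing it, so the exports persist in the shell you launch OpenClaude from.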
macOS / Linux:
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b
openclaude
Windows PowerShell:
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
If you have Ollama installed, you can skip the env var setup entirely:
ollama launch openclaude --model qwen2.5-coder:7b
This automatically sets ANTHROPIC_BASE_URL, model routing, and auth so all API traffic goes through your local Ollama instance. Works with any model you have pulled — local or cloud.
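Before pointing OpenClaude at a local endpoint, it can help to confirm the server is actually listening. A sketch using the `/v1/models` route that OpenAI-compatible servers (including Ollama) expose:

```shell
# Probe the OpenAI-compatible endpoint; defaults match the Ollama example above.
BASE_URL="${OPENAI_BASE_URL:-http://localhost:11434/v1}"
if curl -fsS --max-time 2 "$BASE_URL/models" >/dev/null 2>&1; then
  STATUS="reachable"
else
  STATUS="unreachable (is 'ollama serve' running?)"
fi
echo "$BASE_URL is $STATUS"
```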
Beginner-friendly guides:
Advanced and source-build guides:
| Provider | Setup Path | Notes |
| --- | --- | --- |
| OpenAI-compatible | /provider or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible /v1 servers |
| Gemini | /provider or env vars | Supports API key, access token, or local ADC workflow on current main |
| GitHub Models | /onboard-github | Interactive onboarding with saved credentials |
| Codex OAuth | /provider | Opens ChatGPT sign-in in your browser and stores Codex credentials securely |
| Codex | /provider | Uses existing Codex CLI auth, OpenClaude secure storage, or env credentials |
| Ollama | /provider, env vars, or ollama launch | Local inference with no API key |
| Atomic Chat | /provider, env vars, or bun run dev:atomic-chat | Local Model Provider; auto-detects loaded models |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |
`.openclaude-profile.json` support

OpenClaude supports multiple providers, but behavior is not identical across all of them.
For best results, use models with strong tool/function calling support.
OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimization or splitting work by model strength.
Add to ~/.openclaude.json:
{
"agentModels": {
"deepseek-v4-flash": {
"base_url": "https://api.deepseek.com/v1",
"api_key": "sk-your-key"
},
"gpt-4o": {
"base_url": "https://api.openai.com/v1",
"api_key": "sk-your-key"
}
},
"agentRouting": {
"Explore": "deepseek-v4-flash",
"Plan": "gpt-4o",
"general-purpose": "gpt-4o",
"frontend-dev": "deepseek-v4-flash",
"default": "gpt-4o"
}
}
When no routing match is found, the global provider remains the fallback.
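The lookup order amounts to: exact agent match, then the "default" entry, then the global provider. A shell sketch mirroring the example routing table above (agent and model names are the ones from that example; the function is illustrative, not OpenClaude code):

```shell
# Resolve an agent name to a model, mirroring the example routing table.
resolve_model() {
  case "$1" in
    Explore|frontend-dev)  echo "deepseek-v4-flash" ;;
    Plan|general-purpose)  echo "gpt-4o" ;;
    *)                     echo "gpt-4o" ;;  # "default" entry; else global provider
  esac
}
resolve_model Explore       # prints deepseek-v4-flash
resolve_model code-review   # no match, falls back to the default
```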
Note: `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.
By default, WebSearch works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web search path out of the box.
Note: DuckDuckGo fallback works by scraping search results and may be rate-limited, blocked, or subject to DuckDuckGo's Terms of Service. If you want a more reliable supported option, configure Firecrawl.
For Anthropic-native backends and Codex responses, OpenClaude keeps the native provider web search behavior.
WebFetch works, but its basic HTTP plus HTML-to-markdown path can still fail on JavaScript-rendered sites or sites that block plain HTTP requests.
Set a Firecrawl API key if you want Firecrawl-powered search/fetch behavior:
export FIRECRAWL_API_KEY=your-key-here
With Firecrawl enabled:
- WebSearch can use Firecrawl's search API, while DuckDuckGo remains the default free path for non-Claude models
- WebFetch uses Firecrawl's scrape endpoint instead of raw HTTP, handling JS-rendered pages correctly

The free tier at firecrawl.dev includes 500 credits. The key is optional.
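Since the key is optional, the backend selection reduces to a presence check. A sketch of that decision (the backend labels are illustrative, not OpenClaude internals):

```shell
# Pick the WebFetch backend based on whether a Firecrawl key is present (sketch).
if [ -n "$FIRECRAWL_API_KEY" ]; then
  FETCH_BACKEND="firecrawl-scrape"
else
  FETCH_BACKEND="plain-http+html-to-markdown"
fi
echo "WebFetch backend: $FETCH_BACKEND"
```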
OpenClaude can be run as a headless gRPC service, allowing you to integrate its agentic capabilities (tools, bash, file editing) into other applications, CI/CD pipelines, or custom user interfaces. The server uses bidirectional streaming to send real-time text chunks, tool calls, and request permissions for sensitive commands.
Start the core engine as a gRPC service on localhost:50051:
npm run dev:grpc
| Variable | Default | Description |
|-----------|-------------|------------------------------------------------|
| GRPC_PORT | 50051 | Port the gRPC server listens on |
| GRPC_HOST | localhost | Bind address. Use 0.0.0.0 to expose on all interfaces (not recommended without authentication) |
We provide a lightweight CLI client that communicates exclusively over gRPC. It acts just like the main interactive CLI, rendering colors, streaming tokens, and prompting you for tool permissions (y/n) via the gRPC action_required event.
In a separate terminal, run:
npm run dev:grpc:cli
Note: The gRPC definitions are located in src/proto/openclaude.proto. You can use this file to generate clients in Python, Go, Rust, or any other language.
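For example, Python stubs could be generated with grpcio-tools. A sketch that assumes you are in the repo root and have `grpcio-tools` installed; the `gen/` output directory is arbitrary:

```shell
# Generate Python gRPC stubs from the bundled proto (run from the repo root).
PROTO=src/proto/openclaude.proto
mkdir -p gen
if [ -f "$PROTO" ]; then
  python -m grpc_tools.protoc -I src/proto \
    --python_out=gen --grpc_python_out=gen "$PROTO"
else
  echo "$PROTO not found; run this from the OpenClaude repo root"
fi
```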
bun install
bun run build
node dist/cli.mjs
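After building, a quick check that the entry point exists can save a confusing failure later (a sketch; the path comes from the build step above):

```shell
# Verify the source build produced the CLI entry point before running it.
if [ -f dist/cli.mjs ]; then
  echo "build ok: dist/cli.mjs present"
else
  echo "dist/cli.mjs missing: run 'bun run build' first"
fi
```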
Helpful commands:
- bun run dev
- bun test
- bun run test:coverage