A web UI coding agent that handles the full development loop: understanding your codebase, planning features collaboratively, spinning up isolated branch agents, writing the code, and shipping PRs — all in parallel. Written in Go.
Ogcode acts as a pair programmer that actually codes with you. It doesn't just suggest snippets; it understands your entire codebase, plans complex features, and executes them by creating branches and PRs automatically. It can even run multiple tasks in parallel across different branches, allowing you to ship entire features in a fraction of the time. Whether you're hunting a bug or building a zero-to-one feature, Ogcode handles the heavy lifting so you can focus on the architecture.
Ogcode gives you two ways to work with AI on your codebase: an interactive agent session (`ogcode`) and a collaborative planning mode (`ogcode plan`).

Ogcode's Agentic Session Memory revolutionizes how AI coding assistants handle context in long-running sessions. Instead of sending the entire conversation history to the LLM (which quickly becomes expensive and hits token limits), Ogcode intelligently extracts, stores, and retrieves only the relevant context needed for each query.
| Feature | Impact |
|---------|--------|
| ~70% Token Savings | Drastically reduced API costs on long sessions |
| Infinite Context | No practical limit on session length or codebase size |
| Higher Accuracy | Only relevant memories are retrieved per query |
export OGCODE_AGENTIC_MEMORY_MODE=true
| Session Length | Traditional | With Agentic Memory | Savings |
|----------------|-------------|---------------------|---------|
| 50 messages | ~25K tokens | ~8K tokens | 68% |
| 200 messages | ~100K tokens | ~28K tokens | 72% |
| 1000 messages | ~500K tokens | ~120K tokens | 76% |
Actual savings vary based on codebase complexity and conversation patterns.
Via Homebrew:
brew tap prasenjeet-symon/ogcode
brew install ogcode
Via curl:
curl -fsSL http://ogcode.xyz/install.sh | sh
The installer auto-detects your platform, downloads the latest release, and installs to /usr/local/bin (uses sudo if needed).
Via PowerShell:
irm http://ogcode.xyz/install.ps1 | iex
This downloads the latest release, extracts it to %LOCALAPPDATA%\ogcode, and adds it to your PATH automatically.
Via winget (after next release):
winget install prasenjeet-symon.ogcode
Manual install:
1. Download ogcode_Windows_x86_64.zip (or _arm64.zip if you have an ARM device).
2. Extract it to a folder (e.g. C:\Tools\ogcode).
3. Add that folder to your Path environment variable:
   - Press Win + S and search for Edit environment variables for your account.
   - Select Path and click Edit.
   - Add the folder path (C:\Tools\ogcode).
4. Verify the install with ogcode version.
go install github.com/prasenjeet-symon/ogcode@latest
docker run -p 8080:8080 -v $(pwd):/workspace -w /workspace ghcr.io/prasenjeet-symon/ogcode:latest
Ogcode auto-detects available AI providers based on environment variables. No config files are required.
Set at least one API key (or use Ollama):
| Variable | Provider |
|----------|----------|
| ANTHROPIC_API_KEY | Anthropic (Claude) |
| OPENAI_API_KEY | OpenAI (GPT) |
| OPENROUTER_API_KEY | OpenRouter |
| OLLAMA_API_KEY | Ollama Cloud (see below) |
| OLLAMA_BASE_URL | Ollama (local / cloud URL) |
Ogcode auto-detects Ollama on macOS/Linux if the binary is installed at a common path. On Windows, or if you have a non-standard install, set OLLAMA_BASE_URL explicitly.
Local setup (default):
# macOS / Linux — auto-detected if ollama is installed
ollama serve
ogcode
# Or be explicit on any OS:
export OLLAMA_BASE_URL=http://localhost:11434/v1
ogcode
On Windows (PowerShell):
$env:OLLAMA_BASE_URL = "http://localhost:11434/v1"
ogcode
Remote or Ollama Cloud:
export OLLAMA_BASE_URL=https://api.ollama.com/v1 # or your custom endpoint
export OLLAMA_API_KEY=your-api-key # required for cloud / authenticated endpoints
ogcode
On Windows (PowerShell):
$env:OLLAMA_BASE_URL = "https://api.ollama.com/v1"
$env:OLLAMA_API_KEY = "your-api-key"
ogcode
Set a default model:
export OLLAMA_MODEL=codellama # defaults to qwen3-coder-next if not set
Available models in the UI include: qwen3, codellama, llama3.1, deepseek-coder-v2, mistral, and others. Any model you have pulled in Ollama will work — just select it from the model dropdown in the web UI.
To give the agent long-term memory across sessions, set:
export OGCODE_AGENTIC_MEMORY_MODE=true
This connects to an MCP-compatible memory server (configure via MCP_SERVER_* env vars).
You can add custom models (e.g. fine-tuned endpoints) through the web UI at Settings → Models.
ogcode
Opens the web UI at http://localhost:8080. Chat with the agent, ask it to read files, write code, run commands, or search the codebase.
ogcode plan
Opens the planning interface. Describe what you want to build. The planning agent will understand your codebase and discuss the approach with you. When you are satisfied, click Lock Plan — the agent breaks it into tasks with dependencies, effort, and complexity estimates.
Run on a custom port:
ogcode -p 3000
ogcode plan -p 3000
ogcode version
Plans are archived as markdown files in .ogcode/archives/ once all tasks are completed.
Join the Ogcode community on Discord for discussions, support, and updates:
MIT License — see LICENSE for details.