# Ralph Orchestrator

by mikeyobrien

A hat-based orchestration framework that keeps AI agents in a loop until the task is done. An improved implementation of the Ralph Wiggum technique for autonomous AI agent orchestration.

> "Me fail English? That's unpossible!" - Ralph Wiggum

Documentation | Getting Started | Presets

## Installation

```shell
# Add to your Claude Code skills
git clone https://github.com/mikeyobrien/ralph-orchestrator
```
Install via npm:

```shell
npm install -g @ralph-orchestrator/ralph-cli
```

Or via the standalone installer script:

```shell
curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/mikeyobrien/ralph-orchestrator/releases/latest/download/ralph-cli-installer.sh | sh
```
Or via Cargo:

```shell
cargo install ralph-cli
```

Homebrew is not currently published from this repository's automated release flow. Prefer npm, Cargo, or the GitHub Releases installer.
## Quick Start

```shell
# 1. Initialize Ralph with your preferred backend
ralph init --backend claude

# 2. Plan your feature (interactive PDD session)
ralph plan "Add user authentication with JWT"
# Creates: specs/user-authentication/requirements.md, design.md, implementation-plan.md

# 3. Implement the feature
ralph run -p "Implement the feature in specs/user-authentication/"
```
Ralph iterates until it outputs LOOP_COMPLETE or hits the iteration limit.
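The iterate-until-done idea can be sketched in a few lines. This is a minimal illustration of the Ralph Wiggum technique, not Ralph's actual implementation; `run_agent` is a hypothetical callable standing in for a backend invocation:

```python
def orchestrate(run_agent, prompt, max_iterations=10):
    """Re-invoke the agent until it signals completion or the limit is hit."""
    transcript = []
    for _ in range(max_iterations):
        output = run_agent(prompt, transcript)
        transcript.append(output)
        if "LOOP_COMPLETE" in output:
            return transcript, True   # agent declared the task done
    return transcript, False          # iteration limit reached

# Toy agent that finishes on its third pass
outputs = iter(["working...", "still working...", "done LOOP_COMPLETE"])
transcript, done = orchestrate(lambda p, t: next(outputs), "demo task")
```

The real orchestrator adds persistence, presets, and backend management around this core loop.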
For simpler tasks, skip planning and run directly:
```shell
ralph run -p "Add input validation to the /users endpoint"
```
## Web Dashboard

> Alpha: The web dashboard is under active development. Expect rough edges and breaking changes.

Ralph includes a web dashboard for monitoring and managing orchestration loops.

```shell
ralph web                       # starts Rust RPC API + frontend + opens browser
ralph web --no-open             # skip browser auto-open
ralph web --backend-port 4000   # custom RPC API port
ralph web --frontend-port 8080  # custom frontend port
ralph web --legacy-node-api     # opt into deprecated Node tRPC backend
```
`ralph mcp serve` is scoped to a single workspace root per server instance:

```shell
ralph mcp serve --workspace-root /path/to/repo
```

Precedence is:

1. `--workspace-root` flag
2. `RALPH_API_WORKSPACE_ROOT` environment variable

For multi-repo use, run one MCP server instance per repo/workspace. Ralph's current control-plane APIs persist config, tasks, loops, planning sessions, and collections under a single workspace root, so server-per-workspace is the deterministic model.
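As a sketch, the two configuration paths look like this (the flag and environment variable are the ones named in the precedence list; the repo path is a placeholder):

```shell
# Highest precedence: explicit flag
ralph mcp serve --workspace-root /path/to/repo

# Fallback: environment variable
RALPH_API_WORKSPACE_ROOT=/path/to/repo ralph mcp serve
```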
Requirements: Node.js (the Rust RPC API ships as `ralph-api`).

On first run, `ralph web` auto-detects missing `node_modules` and runs `npm install`.
To set up Node.js:
```shell
# Option 1: nvm (recommended)
nvm install  # reads .nvmrc

# Option 2: direct install from https://nodejs.org/
```
For development:
```shell
npm install                # install frontend + legacy backend deps
npm run dev:api            # Rust RPC API (port 3000)
npm run dev:web            # frontend (port 5173)
npm run dev                # frontend only (default)
npm run dev:legacy-server  # deprecated Node backend (optional)
npm run test               # all frontend/backend workspace tests
```
Ralph can run as an MCP server over stdio for MCP-compatible clients:
```shell
ralph mcp serve
```
Use this mode from an MCP client configuration rather than an interactive terminal workflow.
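For example, a stdio-based MCP client might register Ralph like this. The `mcpServers` shape below follows the convention used by Claude Desktop-style clients and is an assumption; adjust it to your client's config format, and run one entry per workspace:

```json
{
  "mcpServers": {
    "ralph": {
      "command": "ralph",
      "args": ["mcp", "serve", "--workspace-root", "/path/to/repo"]
    }
  }
}
```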
Ralph implements the Ralph Wiggum technique: autonomous task completion through continuous iteration. It supports the code-assist, debug, research, review, and pdd-to-code-assist presets, with more patterns documented as examples.

Ralph also supports human interaction during orchestration via Telegram. Agents can ask questions and block until answered; humans can send proactive guidance at any time.
Quick onboarding (Telegram):

```shell
ralph bot onboard --telegram  # guided setup (token + chat id)
ralph bot status              # verify config
ralph bot test                # send a test message
ralph run -c ralph.bot.yml -p "Help the human"
```
```yaml
# ralph.yml
bot:
  enabled: true
  telegram:
    bot_token: "your-token"  # Or RALPH_TELEGRAM_BOT_TOKEN env var
```
- Agents ask questions via `human.interact` events; the loop blocks until a response arrives or times out
- Messages can target a loop with an `@loop-id` prefix, or default to the primary loop
- `/status`, `/tasks`, and `/restart` commands provide real-time loop visibility

See the Telegram guide for setup instructions.
Full documentation is available at mikeyobrien.github.io/ralph-orchestrator.
Contributions are welcome! See CONTRIBUTING.md for guidelines and CODE_OF_CONDUCT.md for community standards.
MIT License — See LICENSE for details.
Join the ralph-orchestrator community to discuss AI agent patterns, get help with your implementation, or contribute to the roadmap.
> "I'm learnding!" - Ralph Wiggum