by MedChaouch
Multi-LLM orchestration framework
```bash
# Add to your Claude Code skills
git clone https://github.com/MedChaouch/Puzld.ai
```

Beyond CLI wrappers. PuzldAI is a complete AI orchestration framework — route tasks, explore codebases, execute file edits, build memory, and generate training data.
PuzldAI is a terminal-native framework for orchestrating multiple AI agents. Route tasks to the best agent, compare responses, chain agents in pipelines, or let them collaborate. Agentic Mode gives LLMs tools to explore your codebase (view, glob, grep, bash) and propose file edits with permission prompts — like Claude Code, but for any LLM. Memory/RAG stores decisions and code for future context. Observation Layer logs everything for DPO fine-tuning. One framework that grows with your AI workflow.
| Problem | Solution |
|---------|----------|
| Claude is great at code, Gemini at research | Auto-routing picks the best agent |
| Need specific model versions | Model selection — pick sonnet, opus, haiku, etc. |
| Want multiple opinions | Compare mode runs all agents in parallel |
| Complex tasks need multiple steps | Pipelines chain agents together |
| Repetitive workflows | Workflows save and reuse pipelines |
| Need agents to review each other | Collaboration — correct, debate, consensus |
| Want LLM to explore & edit files safely | Agentic mode — tools, permission prompts, apply |
| Context gets lost between sessions | Memory/RAG — semantic retrieval of past decisions |
| Need data to fine-tune models | Observations — export DPO training pairs |
| Need AI to understand your codebase | Indexing — AST parsing, semantic search, AGENTS.md |
| Agent | Source | Requirement | Agentic Mode |
|-------|--------|-------------|--------------|
| Claude | Anthropic | Claude CLI | ✅ Full support |
| Gemini | Google | Gemini CLI | ⚠️ Auto-reads files |
| Codex | OpenAI | Codex CLI | ⚠️ Auto-reads files |
| Ollama | Local | Ollama running | ✅ Full support |
| Mistral | Mistral AI | Vibe CLI | ⚠️ Inconsistent |
Note: Some CLIs (Gemini, Codex) have built-in file reading that bypasses permission prompts. Claude and Ollama respect the permission system fully.
```bash
npm install -g puzldai
```

Or try without installing:

```bash
npx puzldai
```

Update:

```bash
npm update -g puzldai
```
```bash
# Interactive TUI
puzldai

# Single task
puzldai run "explain recursion"

# Compare agents
puzldai compare claude,gemini "best error handling practices"

# Pipeline: analyze → code → review
puzldai run "build a logger" -P "gemini:analyze,claude:code,gemini:review"

# Multi-agent collaboration
puzldai correct "write a sort function" --producer claude --reviewer gemini
puzldai debate "microservices vs monolith" -a claude,gemini
puzldai consensus "best database choice" -a claude,gemini,ollama

# Check what's available
puzldai check
```
`puzld` also works as a shorter alias.
| Mode | Pattern | Use Case | Category |
|------|---------|----------|----------|
| Single | One agent processes task | Quick questions, simple tasks | Basic |
| Compare | Same task → multiple agents in parallel | See different perspectives | Parallel |
| Pipeline | Agent A → Agent B → Agent C | Multi-step processing | Sequencing |
| Workflow | Saved pipeline, reusable | Repeatable workflows | Sequencing |
| Autopilot | LLM generates plan → executes | Complex tasks, unknown steps | AI Planning |
| Correct | Producer → Reviewer → Fix | Quality assurance, code review | Collaboration |
| Debate | Agents argue in rounds, optional moderator | Find flaws in reasoning | Collaboration |
| Consensus | Propose → Vote → Synthesize | High-confidence answers | Collaboration |
| Agentic | LLM explores → Tools → Permission prompts → Apply | Codebase exploration + file edits | Execution |
| Plan | LLM analyzes task → Describes approach | Planning before implementation | Execution |
| Build | LLM explores + edits with full tool access | Direct implementation with tools | Execution |
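These modes compose from the same CLI verbs shown in the quick start. A minimal sketch of a compare → pipeline → correct flow, using only flags documented in this README (the task strings are placeholders; the first line defines a stub fallback so the snippet is copy-paste safe even where puzldai isn't installed):

```bash
# Stub fallback: echo the invocation if puzldai is not on PATH
command -v puzldai >/dev/null 2>&1 || puzldai() { echo "[stub] $*"; }

# Compare: gather two designs in parallel, let an LLM pick the best (-p)
puzldai compare "design a rate limiter for an HTTP API" -a claude,gemini -p

# Pipeline: implement the chosen design, then review it
puzldai run "implement the chosen rate limiter design" -P "claude:code,gemini:review"

# Correct: producer/reviewer pass as a final quality gate
puzldai correct "polish the rate limiter implementation" --producer claude --reviewer gemini
```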
| Mode | Option | Type | Default | Description |
|------|--------|------|---------|-------------|
| Single | agent | AgentName | auto | Which agent to use |
| | model | string | — | Override model (e.g., sonnet, opus) |
| Compare | agents | AgentName[] | — | Agents to compare (min 2) |
| | sequential | boolean | false | Run one-at-a-time vs parallel |
| | pick | boolean | false | LLM selects best response |
| Pipeline | steps | PipelineStep[] | — | Sequence of agent:action |
| | interactive | boolean | false | Confirm between steps |
| Workflow | name | string | — | Workflow to load |
| | interactive | boolean | false | Confirm between steps |
| Autopilot | planner | AgentName | ollama | Agent that generates plan |
| | execute | boolean | false | Auto-run generated plan |
| Correct | producer | AgentName | auto | Agent that creates output |
| | reviewer | AgentName | auto | Agent that critiques |
| | fixAfterReview | boolean | false | Producer fixes based on review |
| Debate | agents | AgentName[] | — | Debating agents (min 2) |
| | rounds | number | 2 | Number of debate rounds |
| | moderator | AgentName | none | Synthesizes final conclusion |
| Consensus | agents | AgentName[] | — | Participating agents (min 2) |
| | maxRounds | number | 2 | Voting rounds |
| | synthesizer | AgentName | auto | Creates final output |
| Agentic | agent | AgentName | claude | Agent to use for exploration |
| | tools | string[] | all | Available tools (view, glob, grep, bash, write, edit) |
| Plan | agent | AgentName | claude | Agent to analyze task |
| Build | agent | AgentName | claude | Agent to implement |
Pick specific models for each agent. Aliases like sonnet, opus, haiku always point to the latest version. Specific versions like claude-sonnet-4-20250514 are pinned.
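The alias-versus-pinned distinction maps directly onto `model set`; a quick sketch (the stub fallback on the first line is only there so the snippet runs without puzldai installed, and actual model availability depends on your Claude CLI):

```bash
# Stub fallback: echo the invocation if puzldai is not on PATH
command -v puzldai >/dev/null 2>&1 || puzldai() { echo "[stub] $*"; }

# Alias: always tracks the latest Sonnet release
puzldai model set claude sonnet

# Pinned version: stays fixed, so results remain reproducible
puzldai model set claude claude-sonnet-4-20250514
```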
```bash
# TUI
/model                              # Open model selection panel

# CLI
puzldai model show                  # Show current models for all agents
puzldai model list                  # List all available models
puzldai model list claude           # List models for specific agent
puzldai model set claude opus       # Set model for an agent
puzldai model clear claude          # Reset to CLI default

# Per-task override
puzldai run "task" -m opus          # Override model for this run
puzldai agent -a claude -m haiku    # Interactive mode with specific model
```
Run the same prompt on multiple agents and compare results side-by-side.
Three views: side-by-side, expanded, or stacked.
```bash
# TUI
/compare claude,gemini "explain async/await"
/sequential                                     # Toggle: run one-at-a-time
/pick                                           # Toggle: select best response

# CLI
puzldai compare "task"                          # Default: claude,gemini
puzldai compare "task" -a claude,gemini,codex   # Specify agents
puzldai compare "task" -s                       # Sequential mode
puzldai compare "task" -p                       # Pick best response
```
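The flags combine. A sketch of a sequential three-way comparison with automatic best-pick — running one-at-a-time (-s) can help when an agent like ollama competes for local resources, though that trade-off is a general observation, not something this README guarantees (the first line is a stub fallback so the snippet runs without puzldai installed):

```bash
# Stub fallback: echo the invocation if puzldai is not on PATH
command -v puzldai >/dev/null 2>&1 || puzldai() { echo "[stub] $*"; }

# Sequential compare across three agents, then pick the best response
puzldai compare "suggest a retry/backoff strategy" -a claude,gemini,ollama -s -p
```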