A Claude Code skill that 10x's your effective context window by dispatching tasks to background AI workers.
```shell
# Add to your Claude Code skills
git clone https://github.com/bassimeledath/dispatch
```

Dispatch 10x's Claude Code's effective context window size. Instead of filling your session with implementation, /dispatch turns it into a lightweight orchestrator: work fans out to background agents, each with its own full context window.
Without dispatch: You ask Claude to review code, refactor a module, write tests, and update docs. Each task fills the context window. By task 3, Claude is losing track. By task 5, you're starting a new session.
With dispatch: You describe all 5 tasks. Workers execute in parallel with fresh contexts. Your main session stays lean. Questions surface to you when needed — no polling, no context lost.
```shell
/dispatch use sonnet to find better design patterns for the auth module
```
Dispatch inverts the usual model: the main session becomes a mediator, not the thinker. It writes a checklist and hands it off. The actual implementation — reading code, reasoning about edge cases, writing tests — happens in fresh worker contexts that each get their own full window. Your main session's context is preserved for orchestration.
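The mediator pattern described here, a main session that writes a checklist and fans work out to workers with fresh contexts, can be sketched in plain Python. All names below are hypothetical (dispatch is a Claude Code skill, not this code); a real worker would call a model where the stub returns a string:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_worker(task: str) -> str:
    # Each worker starts from a fresh, empty context: nothing from the
    # main session leaks in except the task description itself.
    context = [task]
    # A real worker would do model calls here; we just echo the task.
    return f"done: {task}"

def dispatch(tasks):
    # The main session only orchestrates: it fans tasks out and collects
    # results, keeping its own context window lean.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_worker, t) for t in tasks]
        return [f.result() for f in as_completed(futures)]

results = dispatch(["review auth module", "write tests", "update docs"])
```

The point of the sketch is the shape, not the threading: the orchestrator holds only task descriptions and results, never the workers' intermediate reasoning.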
With `claude --background`, multiple terminals, or fire-and-forget agent runners, you are the orchestrator: you track what's running, check on progress, notice failures, and context-switch between outputs.
With dispatch, the AI dispatcher tracks all workers, surfaces questions, reports completions, handles errors, and offers recovery. Your job reduces to: (a) describe what you want, (b) answer questions when asked.
When a /dispatch worker gets stuck, it doesn't silently fail or hallucinate. It asks a clarifying question — the dispatcher surfaces it to you, you answer, and the worker continues without losing context. No restart, no re-explaining, no lost work.
```
Worker is asking: "requirements.txt doesn't exist. What feature should I implement?"

> Add a /health endpoint that returns JSON with uptime and version.

Answer sent. Worker is continuing.
```
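This round-trip can be modeled as a pair of queue hand-offs. A minimal sketch, with all names hypothetical rather than dispatch's actual internals:

```python
import queue
import threading

questions = queue.Queue()   # worker -> dispatcher
answers = queue.Queue()     # dispatcher -> worker
results = []

def worker():
    # Instead of failing or guessing, the worker blocks on an answer,
    # so its context survives the pause intact.
    questions.put("requirements.txt doesn't exist. What feature should I implement?")
    answer = answers.get()                      # resumes once the user replies
    results.append(f"implementing: {answer}")

t = threading.Thread(target=worker)
t.start()

question = questions.get()                      # dispatcher surfaces the question
answers.put("a /health endpoint with uptime and version")  # user's reply
t.join()
```

The design choice this illustrates: the worker's state is never torn down while waiting, so the answer lands in the same context that asked the question.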
The moment a worker is dispatched, your session is immediately free. Dispatch another task. Ask a question. Write code. Workers run in parallel and results arrive as they complete.
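"Results arrive as they complete" maps naturally onto completion-ordered awaiting. A sketch with asyncio (illustrative only; the sleeps stand in for real model work of different lengths):

```python
import asyncio

async def worker(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)   # stands in for real model work
    return name

async def main() -> list:
    tasks = [asyncio.create_task(worker("slow", 0.2)),
             asyncio.create_task(worker("fast", 0.05))]
    finished = []
    # as_completed yields futures in completion order, not submission order,
    # so the fast worker's result is handled first.
    for fut in asyncio.as_completed(tasks):
        finished.append(await fut)
    return finished

order = asyncio.run(main())
```

Dispatching the slow task first does not make you wait on it: the fast result surfaces the moment it is ready.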
Mix models per task. Claude for deep reasoning, GPT for broad generation, Gemini for speed. Reference any model by name — if it's not in your config, ...