by hardness1020
A workflow-driven AI agent framework that executes YAML-defined decision trees, bridging rigid automation and unpredictable agents.
```bash
# Add to your Claude Code skills
git clone https://github.com/hardness1020/Leeway
```

You want to automate the same multi-step task. A skill or system prompt can guide the agent, but it can't pin down the order: every run reads different files, uses different tools, reaches different conclusions. Nothing you can repeat, nothing you can audit.
Leeway enforces the graph. You write a YAML decision tree; the same nodes run in the same order every time. Each node picks its own tools and runs a full agent loop: the LLM iterates and emits a `workflow_signal` when done. The graph transitions on those signals.
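To make that concrete, here is a minimal sketch of what such a decision tree could look like. The field names and node prompts are illustrative assumptions, not Leeway's actual schema (see docs/workflows.md for the real one); the signal names are borrowed from the `assess` example later in this README:

```yaml
# Hypothetical sketch of a Leeway-style workflow.
# Field names are illustrative, not the actual schema.
name: triage-bug-report
start: assess
nodes:
  assess:
    prompt: "Read the bug report and decide whether it needs investigation."
    tools: [read_file, grep]
    signals:                          # signal -> next node (deterministic)
      needs_investigation: investigate
      well_documented: summarize
  investigate:
    prompt: "Reproduce the issue and collect evidence."
    tools: [bash, glob]
    signals:
      done: summarize
  summarize:
    prompt: "Write the final triage summary."
    terminal: true
```

The `signals` maps are the deterministic part: the model chooses which signal to emit, but the YAML alone decides where each signal leads.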
| | Who drives the flow? | What's in a node? | Best for |
|---|---|---|---|
| AutoGPT, OpenClaw | LLM | Whatever the LLM decides | Exploratory tasks |
| n8n | Graph | Any kind of node (API call, transform, AI Agent subflow) | Connecting SaaS APIs (Slack, Stripe, Airtable) |
| Leeway | Graph (decisions) | A full agent loop with local-dev tools | Personal workflows and custom engineering pipelines that plug into your own system (files, shell, codebase) |
n8n is incredible for connecting SaaS APIs. Leeway is built specifically for personal workflows and custom engineering pipelines that integrate directly into your own system: your files, your shell, your repo, not third-party webhooks.
Pick Leeway when the task runs on your own files or shell, needs to be repeatable, and needs a model that can reason inside each step.
Five things that are hard to get from a node-graph workflow tool:
| # | Feature | What it does |
|---|---------|--------------|
| 1 | Agent loop per node | Each node is a full agent loop. The model can call `read_file`, `grep`, `bash`, iterate up to `max_turns`, and emit a `workflow_signal` when done. You decide the graph; the model decides the steps within each node. |
| 2 | Per-node scoping | Every node gets its own `ToolRegistry`, `SkillRegistry`, `HookRegistry`, and MCP set, merged from globals and the node's allowlist. Node A can have `bash` + `glob`; node B can have `web_fetch` + `mcp_github_search`; same workflow. |
| 3 | Progressive skill disclosure | `skill(name="code-review")` returns `SKILL.md` plus a file index. Reference files load only when the LLM explicitly asks. Combined with per-node scoping, each node only sees its allowlisted skills, and only their top-level content until the model drills in. |
| 4 | Turn budget enforcement | For signal-based nodes, the engine tells the LLM how many turns it has and injects an explicit reminder at 2 turns remaining, listing the exact signals to call. No silent cost runaway. |
| 5 | Automatic context compaction | When context fills, Leeway first clears stale tool-result bodies in place. If that's not enough, it summarizes older messages via LLM while preserving the last 6. Fully transparent: no manual `/compact`, no lost context mid-workflow. |
At every branching node, the model picks which path the workflow should take next. The obvious concern: what if it picks a path that doesn't exist? Leeway catches this at the runtime layer, not via prompt discipline:
Each node declares the signals it accepts, and the engine rejects anything outside that set. An `assess` node might allow only `needs_investigation` or `well_documented`; the same signal label on a different node can mean something completely different.

This does not stop the model from confidently picking a legal but wrong option. What the graph gives you in that case is bounded damage: the wrong branch still runs inside its own restricted tool set and validated inputs, and the whole path is auditable after the fact.
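The runtime check amounts to something like the following sketch. The names (`ALLOWED_SIGNALS`, `validate_signal`) are hypothetical; Leeway's real validation lives inside its engine:

```python
# Hypothetical per-node signal allowlists; in Leeway these would come
# from the YAML workflow definition, not a hardcoded dict.
ALLOWED_SIGNALS = {
    "assess": {"needs_investigation", "well_documented"},
    "investigate": {"done"},
}

def validate_signal(node: str, signal: str) -> str:
    """Reject any signal that is not legal for this node, so a
    hallucinated branch fails loudly instead of silently derailing
    the workflow."""
    allowed = ALLOWED_SIGNALS[node]
    if signal not in allowed:
        raise ValueError(
            f"node {node!r} cannot emit {signal!r}; allowed: {sorted(allowed)}"
        )
    return signal
```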
See docs/workflows.md for the full mechanics.
```bash
# Clone and install
git clone https://github.com/your-org/Leeway.git
cd Leeway
uv sync --extra dev

# Set your API key
export ANTHROPIC_API_KEY=sk-...

# Launch interactive mode
uv run leeway

# Or run a single prompt
uv run leeway -p "explain this codebase"

# Use different models
uv run leeway --model claude-opus-4-6

# Use OpenAI-compatible provider
uv run leeway --api-format openai --base-url https://api.openai.com/v1
```
```bash
# Health check on any codebase. No input needed, low token usage
uv run leeway
> /code-health start
```
Leeway's core agent loop, tool registry, permission system, and hook lifecycle are inspired by Claude Code's architecture: a minimal, streaming-first loop where the model drives tool use and the host enforces safety around it. Leeway reimplements that design in Python and extends it with a YAML workflow layer, parallel branches, cron scheduling, and per-node scoping.
```mermaid
flowchart LR
    U[User Prompt] --> C[CLI / React TUI]
    C --> R[RuntimeBundle]
    R --> Q[QueryEngine]
    Q --> A[Anthropic / OpenAI API]
    A -->|tool_use| T[Tool Registry, 21+ tools]
    T --> P[Permissions + Hooks]
    P --> X["Files | Shell | Web | MCP | Tasks | Cron"]
    X --> Q
```
The human defines the graph. The AI operates within each node. Deterministic transitions connect them. Parallel branches run concurrently with per-branch scoping and human-in-the-loop approval gates.
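Stripped of the per-node agent loop, the deterministic transition layer reduces to a small walk over the graph. This is a sketch under assumed data shapes (nodes as dicts with a `signals` map), not Leeway's actual code:

```python
def run_workflow(nodes, start, run_node):
    """Walk the graph: each node's agent loop (run_node) returns a
    signal; the graph, not the model, decides which node runs next.
    The returned trace is the auditable path through the workflow."""
    current, trace = start, []
    while current is not None:
        node = nodes[current]
        signal = run_node(current)             # full agent loop happens here
        trace.append((current, signal))        # record for auditing
        current = node["signals"].get(signal)  # deterministic transition;
                                               # unmapped signal = terminal
    return trace
```

Because `run_node` is the only nondeterministic piece, two runs that emit the same signals traverse exactly the same nodes, which is what makes the path repeatable and auditable.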
See .leeway/workflows/code-health.yaml. It covers all five patterns (linear, branch, loop, terminal, parallel) in one workflow with skills, hooks, and approval gates.
```bash
> /workflows
```
See docs/workflows.md for the full pattern catalog and every property table.
| Topic | Docs |
|---|---|
| Writing workflows (patterns, properties, transitions) | docs/workflows.md |
| Built-in tools and custom tool authoring | docs/tools.md |
| Skills with progressive disclosure | docs/skills.md |
| Hooks (command + HTTP lifecycle callbacks) | docs/hooks.md |
| MCP server integration | docs/mcp.md |
| Plugins (distributable bundles) | docs/plugins.md |
| Scheduling, cron, and remote triggers | docs/scheduling.md |
| Permission modes and path rules | docs/permissions.md |
| Slash command reference | docs/commands.md |
MIT