by heymrun
Self-hosted AI workflow automation platform with visual canvas, agents, RAG, HITL, MCP, and observability in one runtime.
# Add to your Claude Code skills

```shell
git clone https://github.com/heymrun/heym
```

Heym is an AI-native automation platform built from the ground up around LLMs, agents, and intelligent tooling. Wire together AI agents, vector stores, web scrapers, HTTP calls, and message queues on a visual canvas — then deploy instantly via Docker.
Unlike platforms that started as classic trigger-action automation and layered AI on top later, in Heym, AI is the execution model.
Explore the product site at heym.run.
Many automation platforms turn essential production features into upgrade pressure: global variables, execution history and search, insights, AI Builder / Motherboard capabilities, observability, audit-style logs, team controls, scaling, or customer-facing portals.
Heym takes the opposite position. These are core workflow primitives, not enterprise bait. They ship in the free self-hostable product because serious AI automation should be inspectable, shareable, observable, and deployable from day one, without arbitrary production run limits.
Our enterprise offering is for commercial licensing, deployment help, dedicated support, and additional security layers. It is not a strategy for hiding core workflow and AI-native capabilities behind a sales call, now or later.
The demos below illustrate an agent–subagent layout instead of a purely step-by-step, single-thread agent chain. For a request like “How do I get from Berlin to Frankfurt?” and “What should I eat there?”, subagents can work on those parts in parallel. That tends to finish faster, keeps each model turn focused (less context bloat), and avoids pressuring one model to produce two large, unrelated answers in a single reply.
You can still answer with two separate LLM calls (one per question) or run several calls in sequence and merge the results in a final step—those patterns work—but for this kind of multi-part ask they are usually slower than parallel subagents behind an orchestrator.
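The orchestrator-with-parallel-subagents pattern described above can be sketched in plain Python. This is a minimal illustration of the fan-out/merge shape, not Heym's actual runtime: the agent functions here are hypothetical stand-ins for the Roadmap and Best Food agent nodes, with a sleep simulating LLM latency.

```python
import asyncio

# Hypothetical stand-ins for the Roadmap and Best Food subagents; on the
# Heym canvas these would be agent nodes, each backed by its own LLM call.
async def roadmap_agent(question: str) -> str:
    await asyncio.sleep(0.1)  # simulates LLM latency
    return f"Route answer for: {question}"

async def best_food_agent(question: str) -> str:
    await asyncio.sleep(0.1)
    return f"Food answer for: {question}"

async def orchestrator(request: str) -> dict:
    # Fan the multi-part request out to both subagents concurrently,
    # then merge their focused answers into one user-facing result.
    route, food = await asyncio.gather(
        roadmap_agent("How do I get from Berlin to Frankfurt?"),
        best_food_agent("What should I eat in Frankfurt?"),
    )
    return {"request": request, "route": route, "food": food}

result = asyncio.run(orchestrator("Berlin -> Frankfurt trip"))
print(result["route"])
print(result["food"])
```

Because both subagent calls run inside one `asyncio.gather`, total latency is roughly the slower of the two calls rather than their sum, which is the speed advantage the orchestrator pattern buys over sequential chaining.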
Describe the agents, orchestration pattern, and user-facing result you want; Heym builds the workflow on the canvas.

Example prompt
Create a workflow for me that includes a Roadmap Agent and a Best Food Agent. When the Orchestrator Agent receives a request, it will invoke these subagents in parallel and return the result to the user.
Execute the workflow directly from the canvas and inspect each step as results move through the graph.

Create agent skills from natural language, preview the generated SKILL.md, and attach them to the agent.

Example prompt
Create a skill for me and add it to the agent. The Orchestrator Agent will call this skill after receiving information from the subagents, and the skill will create a simple execution plan explaining what can actually be done in the destination city.
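A generated SKILL.md for that prompt might look roughly like the sketch below. The frontmatter fields, skill name, and wording are assumptions for illustration, not the actual output of Heym's skill generator.

```markdown
---
name: execution-planner
description: Builds a simple execution plan from subagent results for a destination city.
---

# Execution Planner

After the Orchestrator Agent has collected the route and food answers from
its subagents, combine them into a short, ordered plan of what the user can
actually do in the destination city (e.g. drive, park, eat, explore).
```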
Turn a workflow into a chat experience so users can invoke the orchestration with a natural request.

Example prompt
I live in Berlin and am planning to go to Frankfurt. How many kilometers is it on the Autobahn? Also, where can I find the best doner in Frankfurt?
- SKILL.md and Python file previews
- /chat/{slug} with streaming responses and file uploads
- /execute or /execute/stream, with per-node start messages and live node event output in the terminal

For a complete list of all features with short descriptions, see Full Feature Set. It covers Getting Started, every node type, reference topics (Expression DSL, workflow structure, webhooks, SSE streaming, AI Assistant, Chat with Docs, Portal, security, etc.), and all dashboard tabs (Workflows, Templates, Variables, Chat, Credentials, Vectorstores, MCP, Traces, Analytics, Evals, Teams, Logs and more).
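To give a feel for consuming per-node streaming output, here is a minimal sketch that parses SSE-style `data:` lines. The payload shape used here (`node` and `status` fields) is a hypothetical illustration, not Heym's documented event schema for /execute/stream.

```python
import json

# Sample SSE-style body as it might arrive from a streaming execute
# endpoint. The field names below are assumptions for illustration.
sample_stream = """\
data: {"node": "Orchestrator Agent", "status": "started"}

data: {"node": "Roadmap Agent", "status": "started"}

data: {"node": "Roadmap Agent", "status": "finished"}
"""

def parse_sse(stream: str):
    """Yield the JSON payload of each `data:` line in an SSE body."""
    for line in stream.splitlines():
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

events = list(parse_sse(sample_stream))
for ev in events:
    print(f'{ev["node"]}: {ev["status"]}')
```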
| Capability | Heym | n8n | Zapier | Make.com |
|---|:---:|:---:|:---:|:---:|
| Built-in LLM node | ✅ | ✅ | ✅ | ✅ |
| LLM Batch API + status branches | ✅ | partial¹⁵ | ❌¹⁵ | partial¹⁵ |
| AI Agent node (tool calling) | ✅ | ✅ | ✅ | ✅ |
| Agent persistent memory (kno