by HuangYuChuh
A powerful ComfyUI workflow skill for OpenClaw and other AI agents that support skills.
# Add to your Claude Code skills
git clone https://github.com/HuangYuChuh/ComfyUI_Skills_OpenClaw

name: comfyui-skill-openclaw
description: |
  Generate images utilizing ComfyUI's powerful node-based workflow capabilities. Supports dynamically loading multiple pre-configured generation workflows and their corresponding parameter mappings from different instances, importing saved workflows in bulk from ComfyUI or local JSON files, converting natural language into parameters, driving local or remote ComfyUI services, tracking execution history with parameters and results, and ultimately returning the images to the target client.
As an OpenClaw Agent equipped with the ComfyUI skill, your objective is to translate the user's conversational requests into strict, structured parameters and hand them over to the underlying Python scripts to execute workflows across multi-server environments.
If the user asks you to open, launch, or bring up the local Web UI for this skill, run:
python3 ./ui/open_ui.py
This command launches the skill's local Web UI.
This skill is primarily a workflow execution client for a local or remote ComfyUI server.
The core native ComfyUI routes relevant to this skill are:
- POST /prompt to submit a workflow run
- GET /history/{prompt_id} to poll for completion
- GET /view to download generated images

Other native ComfyUI routes such as /ws, /queue, /interrupt, /upload/image, /object_info, and /system_stats exist upstream but are not required for the basic execution path implemented here.
For the route-level reference and the distinction between native ComfyUI routes and this repository's own manager API, see docs/comfyui-native-routes.md.
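The three native routes above can be sketched as a minimal Python client. This is an illustrative sketch only: the base URL (ComfyUI's conventional default port 8188) and the helper names are assumptions, not part of this repository.

```python
import json
import urllib.parse
import urllib.request

# Assumed default ComfyUI address; adjust for your server.
BASE_URL = "http://127.0.0.1:8188"

def build_prompt_request(workflow: dict, client_id: str = "comfyui-skill") -> bytes:
    """Build the JSON body for POST /prompt: the API-format workflow
    graph goes under the "prompt" key."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def history_url(base: str, prompt_id: str) -> str:
    """URL to poll until the prompt_id shows up with outputs."""
    return f"{base}/history/{prompt_id}"

def view_url(base: str, filename: str, subfolder: str = "", folder_type: str = "output") -> str:
    """URL for GET /view, which streams a generated image file."""
    query = urllib.parse.urlencode(
        {"filename": filename, "subfolder": subfolder, "type": folder_type}
    )
    return f"{base}/view?{query}"

def submit(workflow: dict) -> str:
    """POST the workflow and return the prompt_id (requires a live server)."""
    req = urllib.request.Request(
        f"{BASE_URL}/prompt",
        data=build_prompt_request(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]
```

The skill's own client script wraps this same submit/poll/download loop; the sketch is only meant to show the shape of the HTTP traffic.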

This project is a ComfyUI skill integration layer for OpenClaw, Codex, and Claude Code. It turns the workflows you build and export from ComfyUI in API format into callable skills that these agents can trigger with natural language.
It converts natural language requests into structured skill arguments, maps them to ComfyUI workflow inputs, submits jobs to ComfyUI, waits for completion, then pulls generated images back to local disk.
For the upstream ComfyUI local server routes that back this skill, see docs/comfyui-native-routes.md.
The local manager API also exposes higher-level workflow execution and history routes:
- POST /api/servers/{server_id}/workflow/{workflow_id}/run
- GET /api/servers/{server_id}/workflow/{workflow_id}/history
- GET /api/servers/{server_id}/workflow/{workflow_id}/history/{run_id}

Before running a workflow, check whether the target ComfyUI server is online.
You can query the manager API endpoint:
GET /api/servers/{server_id}/status
This returns JSON with "status": "online" or "status": "offline".
Recommended agent flow: Before Step 3 (Trigger Image Generation), run a server status check. If offline, ask the user to start ComfyUI and retry once it is online.
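The status check above can be sketched as follows; the function names and the manager base URL are illustrative assumptions, only the route and the "status" field come from this document.

```python
import json
import urllib.request

def parse_status(payload: dict) -> bool:
    """True when the manager reports the ComfyUI server as online."""
    return payload.get("status") == "online"

def server_is_online(manager_base: str, server_id: str) -> bool:
    """Query GET /api/servers/{server_id}/status and interpret the result.
    Treats connection errors the same as an explicit "offline"."""
    url = f"{manager_base}/api/servers/{server_id}/status"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return parse_status(json.loads(resp.read()))
    except OSError:
        return False
```

Folding connection errors into "offline" keeps the agent flow simple: either way, the user is asked to start ComfyUI and the check is retried.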
Use the manager UI/API when the user wants to register workflows into this skill instead of running them immediately.
For details on /userdata, local files, manager API routes, and import result semantics, read references/workflow-import.md.

If the user provides you with a new ComfyUI workflow JSON (API format) and asks you to "configure it" or "add it":
1. Save the workflow JSON locally to ./data/<server_id>/<new_workflow_id>/workflow.json.
2. Identify the tunable inputs inside node definitions (e.g., KSampler's seed, CLIPTextEncode's text for positive/negative prompts, EmptyLatentImage for width/height).
3. Write a parameter schema to ./data/<server_id>/<new_workflow_id>/schema.json. The schema format must follow:
{
  "workflow_id": "<new_workflow_id>",
  "server_id": "<server_id>",
  "description": "Auto-configured by OpenClaw",
  "enabled": true,
  "parameters": {
    "prompt": { "node_id": "3", "field": "text", "required": true, "type": "string", "description": "Positive prompt" }
    // Add other sensible parameters that the user might want to tweak
  }
}
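A schema like this is just a mapping from user-facing parameter names to node inputs in the API-format workflow. A minimal sketch of resolving collected arguments against such a schema (the function name and error handling are illustrative, not the repository's actual implementation):

```python
import copy

def apply_args(workflow: dict, schema: dict, args: dict) -> dict:
    """Write user-supplied args into a copy of the workflow graph,
    following each parameter's node_id/field mapping from the schema.
    Raises if a required parameter is missing."""
    resolved = copy.deepcopy(workflow)
    for name, spec in schema["parameters"].items():
        if name not in args:
            if spec.get("required"):
                raise ValueError(f"missing required parameter: {name}")
            continue  # optional and absent: leave the workflow default
        resolved[spec["node_id"]]["inputs"][spec["field"]] = args[name]
    return resolved
```

Working on a deep copy keeps the saved workflow.json pristine, so the same template can be resolved again with different arguments.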
Before attempting to generate any image, you must first query the registry to understand which workflows are currently supported and enabled:
python ./scripts/registry.py list --agent
Return Format Parsing:
You will receive a JSON document listing all available workflows. Note that they are uniquely identified by the combination of server_id and workflow_id (or, in path format, <server_id>/<workflow_id>):
For each parameter exposed by the workflow's schema:

- required: true — if the user hasn't provided a value, you must ask the user for one.
- required: false — you can infer and generate the value yourself based on the user's description (e.g., translating and optimizing the user's scene), or simply use an empty value or a random number (e.g., seed = a random number).

Once you have identified the workflow to use and collected or generated all necessary parameters, assemble them into a compact JSON string.
For example, if the schema exposes prompt and seed, you need to construct:
{"prompt": "A beautiful landscape, high quality, masterpiece", "seed": 40128491}
If critical parameters are missing, politely ask the user using notify_user. For example: "To generate the image you need, would you like a specific person or animal? Do you have an expected visual style?"
Once the complete parameters are collected, execute the workflow client in a command-line environment (ensure your current working directory is the project root, or navigate to it first).
Pass the full identifier as <server_id>/<workflow_id>.
Note: wrap the entire JSON string in single quotes so that bash does not misinterpret the double quotes and braces inside it.
python ./scripts/comfyui_client.py --workflow <server_id>/<workflow_id> --args '{"key1": "value1", "key2": 123}'
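If you generate this invocation programmatically, the quoting rule can be automated with the standard library. A small illustrative helper (not part of the repository's scripts; the `local/sdxl` id in the usage note is hypothetical):

```python
import json
import shlex

def build_command(workflow: str, args: dict) -> str:
    """Render the client invocation with the JSON payload safely
    quoted for bash via shlex.quote."""
    payload = json.dumps(args, separators=(",", ":"))
    return (
        "python ./scripts/comfyui_client.py "
        f"--workflow {shlex.quote(workflow)} --args {shlex.quote(payload)}"
    )
```

For example, `build_command("local/sdxl", {"seed": 1})` yields a command whose `--args` value is single-quoted exactly as the note above requires.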
Blocking and Result Retrieval:
The client blocks until the run finishes, then reports a JSON result:

- On success: run_id, prompt_id, and an images list whose values are absolute local file paths.
- On failure: run_id together with error, which can be used to inspect the saved execution record through the manager UI/API.
- Internally, the client follows the native ComfyUI sequence POST /prompt -> GET /history/{prompt_id} -> GET /view.

The manager stores execution history per workflow, including raw args, resolved args, prompt ID, result files, status, timing, and error summary. History records live under data/<server_id>/<workflow_id>/history/.
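Interpreting that result can be sketched as follows. The field names (run_id, images, error) come from the description above; the exact output shape of the client may differ, so treat this as an assumption-laden sketch.

```python
def summarize_result(result: dict) -> str:
    """Turn a client result dict into a short status line: the dict
    carries either an images list (success) or an error string (failure),
    plus a run_id in both cases."""
    if result.get("error"):
        return f"run {result['run_id']} failed: {result['error']}"
    images = result.get("images", [])
    return f"run {result['run_id']} produced {len(images)} image(s)"
```

On success, the paths in `images` are what the agent hands back to the client for preview; on failure, `run_id` is the key for looking up the saved history record.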
Once you obtain the absolute local path to the generated image, use your native capabilities to present the file to the user (e.g., in an OpenClaw environment, returning the path allows the client to intercept it and convert it into rich text or an image preview).
Troubleshooting:

- If the requested workflow does not appear in the registry.py listing, tell the user they need to first go to the Web UI panel to upload and configure the mapping for that workflow on the desired server.
- Make sure --args is a valid JSON string wrapped in single quotes.
- Always reference workflows in the combined <server_id>/<workflow_id> (server/workflow) form.
- Run scripts/update_frontend.sh to pull the latest frontend build, or copy ui/static/ from the frontend release when a git update is unavailable.