by iconben
# Add to your Claude Code skills
git clone https://github.com/iconben/z-image-studio
A CLI, a web UI, and an MCP server for the Z-Image-Turbo text-to-image generation model (Tongyi-MAI/Z-Image-Turbo and its variants).
This tool is designed to run efficiently on local machines for Windows/Mac/Linux users. It features specific optimizations for NVIDIA (CUDA), AMD on Linux (ROCm), Intel (XPU), and Apple Silicon (MPS), falling back to CPU if no compatible GPU is detected.

Hybrid Interfaces:
- Tongyi-MAI/Z-Image-Turbo model and quantized variants via diffusers.
- LoRA support: `--lora` entries with optional strengths.
- MCP server (`zimg mcp`, `zimg-mcp`) for local agents, with SSE available at `/mcp-sse` and MCP 2025-03-26 Streamable HTTP transport at `/mcp`.
- Clients should try Streamable HTTP (`/mcp`) first for optimal performance, falling back to SSE (`/mcp-sse`) if needed.

Requirements:
- uv (recommended for dependency management)
- Python 3.12+

Note: torch.compile is disabled by default for Python 3.12+ due to known compatibility issues with the Z-Image model architecture. If you want to experiment with torch.compile on Python 3.12+, set `ZIMAGE_ENABLE_TORCH_COMPILE=1` via environment variable or in `~/.z-image-studio/config.json` (experimental, may cause errors).
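As one way to wire the MCP server into an agent, a client entry might look like the sketch below. The `zimg-mcp` command is the entry point named above; the `mcpServers` key is the convention used by Claude Desktop-style clients and is an assumption here, so check your client's documentation for the exact schema:

```json
{
  "mcpServers": {
    "z-image-studio": {
      "command": "zimg-mcp"
    }
  }
}
```

Remote clients that speak HTTP instead of stdio can point at `http://localhost:8000/mcp` (Streamable HTTP) or `http://localhost:8000/mcp-sse` (SSE) when the server is running.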
Note: AMD GPU support currently requires ROCm, which is only available for Linux PyTorch builds. Windows users with AMD GPUs will currently fall back to CPU.
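For Linux users on ROCm, the environment overrides described below can be combined in a shell session before launching `zimg`. This is a hedged sketch: the GFX version values are common mappings for consumer cards, not guarantees, so verify your card's actual target with `rocminfo`:

```shell
# ROCm pre-launch environment sketch -- values are assumptions for common
# consumer GPUs, not guarantees; check rocminfo for your card's target.
export HSA_OVERRIDE_GFX_VERSION=11.0.0   # RDNA3 (e.g. RX 7000 series); use 10.3.0 for RDNA2
export ZIMAGE_ENABLE_TORCH_COMPILE=1     # experimental: force-enable torch.compile on ROCm
echo "GFX override: ${HSA_OVERRIDE_GFX_VERSION}"
```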
- Install a ROCm build of PyTorch (`pip install torch --index-url https://download.pytorch.org/whl/rocm6.1` or similar). Ensure the PyTorch ROCm version matches your installed driver version, then verify with `zimg models`.
- ROCm is used automatically when `torch.version.hip` is detected.
- For GPUs without official ROCm support, set `HSA_OVERRIDE_GFX_VERSION` (e.g., `10.3.0` for RDNA2, `11.0.0` for RDNA3).
- `torch.compile` is disabled by default on ROCm due to experimental support. You can force-enable it with `ZIMAGE_ENABLE_TORCH_COMPILE=1` if your setup (Triton/ROCm version) supports it.

If you just want the zimg CLI to be available from anywhere, install it as a uv tool:
uv tool install git+https://github.com/iconben/z-image-studio.git
# or, if you have the repo cloned locally:
# git clone https://github.com/iconben/z-image-studio.git
# cd z-image-studio
# uv tool install .
After this, the zimg command is available globally:
zimg --help
To update z-image-studio:
uv tool upgrade z-image-studio
# or, if you have the repo cloned locally, pull the latest source code:
# git pull
For Windows users, a pre-built installer is available that bundles everything you need:
- Installer: `Z-Image-Studio-Windows-x64-x.x.x.exe`
- Installs to `C:\Program Files\Z-Image Studio`
- App data: `%LOCALAPPDATA%\z-image-studio` (contains database, LoRAs, and outputs)

Run Z-Image Studio in a container with Docker:
# Create persistent volume
docker volume create zimg-data
# Run the container
docker run -d \
--name z-image-studio \
-p 8000:8000 \
-v zimg-data:/data \
-v zimg-config:/home/appuser/.z-image-studio \
-v zimg-outputs:/data/outputs \
iconben/z-image-studio:latest
Then open http://localhost:8000 in your browser.
Create a docker-compose.yml file:
```yaml
services:
  z-image-studio:
    image: iconben/z-image-studio:latest
    container_name: z-image-studio
    ports:
      - "8000:8000"
    volumes:
      - zimg-data:/data
      - zimg-config:/home/appuser/.z-image-studio
      - zimg-outputs:/data/outputs
    restart: unless-stopped

volumes:
  zimg-data:
  zimg-config:
  zimg-outputs:
```
Then run:
docker compose up -d
NVIDIA GPU:
```yaml
services:
  z-image-studio:
    image: iconben/z-image-studio:latest
    container_name: z-image-studio
    ports:
      - "8000:8000"
    volumes:
      - zimg-data:/data
      - zimg-config:/home/appuser/.z-image-studio
      - zimg-outputs:/data/outputs
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped

volumes:
  zimg-data:
  zimg-config:
  zimg-outputs:
```
AMD GPU (Linux):
```yaml
services:
  z-image-studio:
    image: iconben/z-image-studio:latest
    container_name: z-image-studio
    ports:
      - "8000:8000"
    volumes:
      - zimg-data:/data
      - zimg-config:/home/appuser/.z-image-studio
      - zimg-outputs:/data/outputs
    devices:
      - /dev/dri:/dev/dri
    restart: unless-stopped

volumes:
  zimg-data:
  zimg-config:
  zimg-outputs:
```
Then run:
docker compose up -d
Basic:
docker run -d \
--name z-image-studio \
-p 8000:8000 \
-v zimg-data:/data \
-v zimg-config:/home/appuser/.z-image-studio \
-v zimg-outputs:/data/outputs \
iconben/z-image-studio:latest
NVIDIA GPU:
docker run -d \
--name z-image-studio \
--gpus all \
-p 8000:8000 \
-v zimg-data:/data \
-v zimg-config:/home/appuser/.z-image-studio \
-v zimg-outputs:/data/outputs \
iconben/z-image-studio:latest