Z Image Turbo – a local model outputting 4 megapixels

Z Image Turbo brings back something that was missing from some of the newer models like Flux.2 Dev, WAN 2.2, and Qwen Image: the ability to run locally. Quantized versions of some of those models do exist, but they are so heavily distilled that your results will not be as good as the originals, even if they do meet the basic need of running locally.

Z Image Turbo, however, fits and runs very well on my RTX 4080 with only 16GB of VRAM. It is a distilled version of Z-Image, a powerful and efficient 6B-parameter image generation model developed by Alibaba's Tongyi Lab, and it should also run on most consumer-grade GPUs with under 16GB of VRAM. The model was posted on the Hugging Face website.
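A quick back-of-envelope calculation shows why the model fits on a 16GB card: 6B parameters stored in bf16 take two bytes each, so the weights alone come in comfortably under 16GB. This is only a sketch; real usage adds activations, the text encoder, and the VAE on top.

```python
# Rough VRAM estimate for the 6B-parameter Z-Image model at bf16.
# Weights only -- activations, text encoder, and VAE add more in practice.
params = 6e9                 # 6B parameters
bytes_per_param = 2          # bfloat16 = 2 bytes per parameter
weights_gib = params * bytes_per_param / 2**30

print(f"weights alone: ~{weights_gib:.1f} GiB")  # ~11.2 GiB, under 16 GiB
```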

The model has since been ported to ComfyUI, and the files are available for direct download.

Model download and storage location

📂 ComfyUI/
├── 📂 models/
│   ├── 📂 diffusion_models/
│   │   └── z_image_turbo_bf16.safetensors
│   ├── 📂 vae/
│   │   └── ae.safetensors
│   └── 📂 text_encoders/
│       └── qwen_3_4b.safetensors
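If you want to sanity-check your install before loading the workflow, a short script can confirm each file sits where ComfyUI expects it. This is a minimal sketch; the `ComfyUI` root path passed at the bottom is an assumption, so point it at your own install directory.

```python
# Sketch: verify the Z Image Turbo files are in the ComfyUI folders
# shown in the tree above. Pass the path to your ComfyUI root.
from pathlib import Path

# Expected layout, taken from the directory tree above.
EXPECTED = {
    "models/diffusion_models": ["z_image_turbo_bf16.safetensors"],
    "models/vae": ["ae.safetensors"],
    "models/text_encoders": ["qwen_3_4b.safetensors"],
}

def check_layout(comfy_root: str) -> list[str]:
    """Return the paths of any missing files (empty list = all present)."""
    root = Path(comfy_root)
    missing = []
    for subdir, files in EXPECTED.items():
        for name in files:
            path = root / subdir / name
            if not path.is_file():
                missing.append(str(path))
    return missing

if __name__ == "__main__":
    # "ComfyUI" is a placeholder -- use your actual install path.
    for m in check_layout("ComfyUI"):
        print("missing:", m)
```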

Once you have the models organized, you can download the workflow I created. It uses sub-graphs, which greatly simplify the workflow; I cover how to create and use sub-graphs in ComfyUI in my YouTube video.

Download Workflow

WWAA-Z-Image-Turbo.zip

The workflow will help you get started using Z Image Turbo.

If you would like to support our site, please consider buying us a coffee on Ko-fi, purchasing a product, or subscribing. Need a fast GPU? Get access to fast GPUs for less than $1 per hour at RunPod.io.
