# ComfyUI Z-Engineer

A custom node for ComfyUI that integrates a local LLM (via OpenAI-compatible API) to engineer optimal prompts for Z-Image Turbo workflows.
## Features
- Local LLM Integration: Connects to your local LLM server (e.g., LM Studio, Oobabooga, vLLM) via standard OpenAI API format.
- System Prompting: Dedicated system prompt input to guide the LLM's behavior.
- Z-Image Optimization: Designed to work with models like Qwen3-4b-Z-Image-Engineer.
## Installation
1. Clone this repository into your ComfyUI `custom_nodes` directory:

   ```
   cd ComfyUI/custom_nodes
   git clone https://github.com/BennyDaBall/ComfyUI-Z-Engineer.git
   ```

2. Restart ComfyUI.
## Usage
1. Find the node under **Z-Engineer > Z-Engineer**.
2. Connect the `prompt` output to your CLIP Text Encode node.
3. Configuration:
   - **Input Prompt**: Your raw idea or base prompt.
   - **System Prompt**: Instructions for the LLM (a recommended system prompt is supplied in the Hugging Face model repo).
   - **API URL**: Your local LLM endpoint (default: `http://localhost:1234/v1`).
   - **Model**: The model name string (default: `local-model`).
   - **Seed/Temperature**: Standard LLM generation parameters.
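Under the hood, the configuration above maps onto a standard OpenAI-style `chat/completions` request. The sketch below is illustrative only, not the node's actual implementation: the function names and the `0.7`/`0` defaults are assumptions, while the URL, model string, and message roles follow the defaults documented above.

```python
import json
import urllib.request

API_URL = "http://localhost:1234/v1"  # default endpoint (e.g., LM Studio)
MODEL = "local-model"                 # default model name string


def build_request(input_prompt, system_prompt, temperature=0.7, seed=0):
    """Assemble an OpenAI-compatible chat/completions payload."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_prompt},  # guides LLM behavior
            {"role": "user", "content": input_prompt},     # your raw idea
        ],
        "temperature": temperature,
        "seed": seed,
    }


def engineer_prompt(input_prompt, system_prompt, **params):
    """POST to the local server and return the engineered prompt text."""
    payload = build_request(input_prompt, system_prompt, **params)
    req = urllib.request.Request(
        f"{API_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice's message content
    return body["choices"][0]["message"]["content"]
```

The returned string is what the node exposes as its `prompt` output for the CLIP Text Encode node.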
## Recommended Model
For best results, use the Z-Image Engineer model: `BennyDaBall/qwen3-4b-Z-Image-Engineer`
## Requirements
- A running local LLM server compatible with OpenAI's `chat/completions` endpoint.
- ComfyUI installed.