# ComfyUI Prompt Helper - Qwen3 Engineer
A simple ComfyUI custom node that loads the local GGUF version of Qwen3-4B-Z-Image-Engineer and expands short inputs into Z-Image Turbo–friendly positive prompts. Model card: https://huggingface.co/BennyDaBall/qwen3-4b-Z-Image-Engineer
## Features
- Runs the GGUF text encoder locally via `llama-cpp-python`.
- Bundles the official system prompt, focused on positive constraints, texture detail, and camera settings.
- Outputs one enhanced prompt string to chain into your workflow.
## Installation
- Place this repo in `ComfyUI/custom_nodes/ComfyUI-Prompt_Helper/`.
- Install dependencies: `pip install -r requirements.txt`.
- Download the GGUF model file and put it under `ComfyUI/models/text_encoders/` (or a subfolder).
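To verify ComfyUI can see the file, a quick check like the one below can help; it assumes ComfyUI's `folder_paths` registry exposes a `"text_encoders"` folder, which may not match the node's own discovery code exactly:

```python
# Run inside a ComfyUI environment (sketch, not the node's code).
import folder_paths

# List .gguf files the way the gguf_name dropdown would see them.
ggufs = [f for f in folder_paths.get_filename_list("text_encoders")
         if f.lower().endswith(".gguf")]
print(ggufs)  # the downloaded model file should appear here
```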
## Usage
- Restart ComfyUI; the node category is `QwenTextEngineer`.
- Pick `gguf_name` (auto-scans for `.gguf` files in `models/text_encoders`).
- `system_prompt` is prefilled; adjust if needed.
- Enter your short description in `prompt`.
- Run; it returns a rich positive prompt ready for downstream nodes.
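For readers curious how those inputs map onto ComfyUI's node API, here is a hypothetical skeleton; the input names mirror the ones above, but the class name, defaults, and body are illustrative, not the extension's actual source:

```python
# Hypothetical skeleton of a node exposing gguf_name / system_prompt / prompt.
import folder_paths

DEFAULT_SYSTEM_PROMPT = "..."  # the bundled system prompt would be prefilled here

class QwenTextEngineer:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "gguf_name": (folder_paths.get_filename_list("text_encoders"),),
            "system_prompt": ("STRING", {"multiline": True,
                                         "default": DEFAULT_SYSTEM_PROMPT}),
            "prompt": ("STRING", {"multiline": True}),
        }}

    RETURN_TYPES = ("STRING",)   # one enhanced prompt string
    FUNCTION = "enhance"
    CATEGORY = "QwenTextEngineer"

    def enhance(self, gguf_name, system_prompt, prompt):
        enhanced = prompt  # placeholder: the real node runs the GGUF model here
        return (enhanced,)

NODE_CLASS_MAPPINGS = {"QwenTextEngineer": QwenTextEngineer}
```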
## Workflow

## Parameters
- `n_ctx`: context length (default 4096).
- `n_gpu_layers`: `-1` loads all layers to GPU when possible.
- `max_new_tokens`: generation length cap.
- `temperature`: sampling temperature (0–1).
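A sketch of where these plausibly land in `llama-cpp-python` (assumed wiring; the node's internals may differ, and `max_new_tokens` would map to the library's `max_tokens`):

```python
# Assumed parameter wiring, not the node's actual source.
from llama_cpp import Llama

llm = Llama(
    model_path="/path/to/model.gguf",  # placeholder path
    n_ctx=4096,        # context length
    n_gpu_layers=-1,   # -1 offloads all layers to the GPU when possible
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "a fox in the snow"}],
    max_tokens=512,    # the node's max_new_tokens cap
    temperature=0.7,   # sampling temperature (0-1)
)
print(out["choices"][0]["message"]["content"])
```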
## FAQ
- If `llama-cpp-python` is missing, install the dependencies and restart ComfyUI.
- If no GGUF file is found, ensure it exists under `models/text_encoders`.
## License
Apache-2.0, as per the upstream model.