# 🎨 ComfyUI Prompt Generator
Generate detailed Stable Diffusion prompts using Qwen3-8B via Ollama, directly within ComfyUI.

## Features
- 7 Style Presets: Cinematic, Anime, Photorealistic, Fantasy, Abstract, Cyberpunk, Sci-Fi
- Temperature & Top-P Controls: Fine-tune generation creativity
- Focus Area (Emphasis): Direct the prompt to emphasize specific aspects
- Mood/Atmosphere: Set the emotional tone of the generated prompt
- Reasoning Toggle: Show or hide the model's thinking process
## Prerequisites

- Ollama installed and running: https://ollama.ai
- Qwen3-8B model pulled:

  ```bash
  ollama pull qwen3:8b
  ```
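
To confirm the prerequisites are met, a quick check like the one below can help. This is a minimal sketch, assuming a default local Ollama install listening on `http://localhost:11434`; it queries Ollama's standard `/api/tags` endpoint to list pulled models.

```python
# check_ollama.py - sketch: verify Ollama is reachable and qwen3:8b is pulled.
# Assumes a default local install on http://localhost:11434.
import json
import sys
import urllib.request

OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"  # lists locally available models
REQUIRED_MODEL = "qwen3:8b"

try:
    with urllib.request.urlopen(OLLAMA_TAGS_URL, timeout=5) as resp:
        models = [m["name"] for m in json.load(resp).get("models", [])]
except OSError as err:
    sys.exit(f"Ollama does not appear to be running ({err}). Try: ollama serve")

if any(name.startswith(REQUIRED_MODEL) for name in models):
    print(f"OK: {REQUIRED_MODEL} is available.")
else:
    sys.exit(f"{REQUIRED_MODEL} not found. Try: ollama pull {REQUIRED_MODEL}")
```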
## Installation

### Option 1: Clone to custom_nodes (Recommended)

```bash
cd /path/to/ComfyUI/custom_nodes
git clone https://github.com/Limbicnation/ComfyUI-PromptGenerator.git
cd ComfyUI-PromptGenerator
pip install -r requirements.txt
```
### Option 2: ComfyUI Manager

Search for "Prompt Generator" in the ComfyUI Manager and install.

Note: If you see "With the current security level configuration, only custom nodes from the 'default channel' can be installed", temporarily set `security_level = weak` in your ComfyUI Manager's `config.ini` file, then restore it to `normal` after installation. See `PUBLISHING.md` for details.
## Usage

- Restart ComfyUI after installation
- Right-click → Add Node → `text/generation` → 🎨 Prompt Generator (Qwen)
- Connect the output `prompt` to your text encoder or save node
## Node Inputs
| Input | Type | Description |
|-------|------|-------------|
| description | STRING | Your image concept (required) |
| style | COMBO | Style preset (cinematic, anime, etc.) |
| emphasis | STRING | Focus area (optional) |
| mood | STRING | Atmosphere/mood (optional) |
| temperature | FLOAT | Creativity (0.1-1.0, default: 0.7) |
| top_p | FLOAT | Sampling threshold (0.1-1.0, default: 0.9) |
| include_reasoning | BOOLEAN | Show model's thinking process |
| model | STRING | Ollama model (default: qwen3:8b) |
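
For reference, the inputs above map naturally onto a call to the Ollama Python client. The sketch below is illustrative only and not the node's actual implementation: the function name and prompt wiring are hypothetical, but it shows how `temperature`, `top_p`, `model`, and `include_reasoning` would typically be applied, with Qwen3's `<think>...</think>` reasoning stripped when the toggle is off.

```python
# Illustrative sketch only - not the node's actual code.
# Requires the Ollama Python package: pip install ollama
import re
import ollama

def generate_prompt(description: str,
                    style_instructions: str,
                    temperature: float = 0.7,
                    top_p: float = 0.9,
                    include_reasoning: bool = False,
                    model: str = "qwen3:8b") -> str:
    """Hypothetical helper mirroring the node's inputs."""
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": style_instructions},
            {"role": "user", "content": description},
        ],
        options={"temperature": temperature, "top_p": top_p},
    )
    text = response["message"]["content"]
    if not include_reasoning:
        # Qwen3 wraps its chain of thought in <think>...</think>; drop it.
        text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
    return text
```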
## Example Workflow

Download the example workflow: `workflow/workflow.png`

## Customizing Styles

Edit `config/templates.yaml` to add or modify style templates. Templates use Jinja2 syntax:

```yaml
my_custom_style:
  name: "My Style"
  description: "Description for UI"
  template: |
    Write a prompt for: {{ description }}
    {% if emphasis %}Focus on: {{ emphasis }}{% endif %}
    {% if mood %}Mood: {{ mood }}{% endif %}
```
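
A custom style can be previewed outside ComfyUI with a few lines of Python. This is a sketch under the assumptions above (the `config/templates.yaml` layout shown, plus the PyYAML and Jinja2 packages); it is not how the node itself loads templates, just a quick way to see a template's rendered output.

```python
# Sketch: render a style template from config/templates.yaml, assuming the
# layout shown above and that PyYAML and Jinja2 are installed.
import yaml
from jinja2 import Template

with open("config/templates.yaml", encoding="utf-8") as f:
    templates = yaml.safe_load(f)

style = templates["my_custom_style"]
rendered = Template(style["template"]).render(
    description="a lighthouse on a stormy coast",
    emphasis="dramatic lighting",
    mood="ominous",
)
print(rendered)
```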
## Troubleshooting

### "Ollama not found"

- Ensure Ollama is installed and the `ollama` command is in your PATH
- Start the Ollama server:

  ```bash
  ollama serve
  ```

### "Model not found"

- Pull the model:

  ```bash
  ollama pull qwen3:8b
  ```

### Temperature/Top-P not working

- Install the Ollama Python package:

  ```bash
  pip install ollama
  ```
## License
Apache 2.0 - See LICENSE for details.
## Related Projects
- Prompt Generator (Gradio App) - Standalone Gradio web UI for prompt generation
## Credits
This ComfyUI node is based on the prompt-gen project.