# NeonLLama ComfyUI Extension
This custom ComfyUI node transforms a core idea into a richly detailed positive prompt using a local Ollama LLM. It also lets you specify "avoid" content, which is:
- Used by the AI to influence the generated prompt by avoiding certain topics.
- Returned unchanged as the negative prompt for Stable Diffusion or similar models.
## Features

- Generates a vivid, descriptive positive prompt from an idea.
- Lets you define what the AI should avoid mentioning (used during generation).
- Returns your avoid list directly as the negative prompt, unmodified.
- Token-aware generation using the `clip-vit-base-patch32` tokenizer.
- Retries until the prompt fits within your token limits.
- Configurable parameters such as token range, model, and retry attempts.
## How It Works

- You input an idea (e.g., `"cyberpunk alley in heavy rain"`).
- You can add avoid terms (e.g., `"blur, soft lighting, extra limbs"`).
- The LLM uses both to generate a positive prompt:
  - The idea is expanded into a structured visual prompt.
  - The avoid terms steer the generation away from unwanted content.
- The avoid list is also passed through untouched as the negative prompt.
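The flow above can be sketched as a request to Ollama's local REST API. The payload shape (`model`, `prompt`, `stream`) matches Ollama's `/api/generate` endpoint, but the instruction wording and the `build_request` helper are hypothetical; the node's actual system prompt is internal to the extension.

```python
def build_request(model, idea, avoid):
    """Assemble a payload for Ollama's /api/generate endpoint (sketch)."""
    instruction = (
        "Expand the following idea into a detailed, comma-separated "
        f"Stable Diffusion positive prompt. Idea: {idea}. "
        f"Do not mention: {avoid}."
    )
    return {"model": model, "prompt": instruction, "stream": False}

payload = build_request("llama3", "cyberpunk alley in heavy rain",
                        "blur, soft lighting, extra limbs")
# The node would POST this to http://localhost:11434/api/generate and read
# the "response" field as the positive prompt, while the avoid string is
# returned unchanged as the negative prompt.
```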
## Outputs

| Output | Type | Description |
|--------|------|-------------|
| `prompt` | STRING | The positive prompt, generated by the LLM. |
| `avoid` | STRING | The negative prompt, returned as provided. |
## Example

Inputs:

- `idea`: `haunted subway station with broken lights`
- `avoid`: `blood, gore, screaming`

Outputs:

- `prompt` (LLM-generated): `"dark abandoned subway, flickering fluorescent lights, cracked tiled walls, shadowy corners, old train cars, graffiti-covered pillars, dim green glow, debris scattered floor"`
- `avoid` (unchanged): `"blood, gore, screaming"`

Use `prompt` as your positive CLIP text, and `avoid` for negative conditioning.
## Configuration Fields

| Name | Type | Description |
|------|------|-------------|
| `model` | Dropdown | Select the Ollama model to use. |
| `idea` | Multiline Text | The concept or image idea. |
| `avoid` | Multiline Text | Words/themes to avoid (used by the LLM and passed through as the negative prompt). |
| `max_tokens` | Int | Maximum allowed tokens for the generated prompt. |
| `min_tokens` | Int | Minimum token target. |
| `max_attempts` | Int | Maximum retries to hit the token range. |
| `regen_on_each_use` | Bool | Force prompt regeneration every time the node runs. |
## Regeneration Behavior

If enabled, `regen_on_each_use` forces the node to regenerate the prompt every time it executes, ensuring fresh output even if the inputs don't change.
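One way a ComfyUI node can implement this is through the `IS_CHANGED` classmethod, which ComfyUI uses to decide whether a cached result is still valid. The class below is a hypothetical sketch, not this extension's actual code: returning `NaN` works because `NaN` never compares equal to itself, so the cache check always fails and the node re-runs.

```python
class NeonLlamaNodeSketch:
    """Hypothetical sketch of regeneration control in a ComfyUI node."""

    @classmethod
    def IS_CHANGED(cls, regen_on_each_use=False, **kwargs):
        if regen_on_each_use:
            # NaN != NaN, so the cached fingerprint never matches
            # and ComfyUI re-executes the node on every run.
            return float("nan")
        return 0.0  # stable fingerprint: cached output can be reused

fingerprint = NeonLlamaNodeSketch.IS_CHANGED(regen_on_each_use=True)
```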
## License

MIT License