ComfyUI Extension: neonllama

Authored by NeonLightning


    🧠 NeonLLama ComfyUI Extension

    This custom ComfyUI node transforms a core idea into a richly detailed positive prompt using a local Ollama LLM. It also lets you specify "avoid" content, which is:

    • Used by the AI to influence the generated prompt by avoiding certain topics.
    • Returned unchanged as the negative prompt for Stable Diffusion or similar models.

    🚀 Features

    • 🧠 Generates a vivid, descriptive positive prompt from an idea.
    • ⛔ Lets you define what the AI should avoid mentioning (used during generation).
    • 🎯 Returns your avoid list directly as a negative prompt, unmodified.
    • 🧮 Token-aware generation using the clip-vit-base-patch32 tokenizer.
    • 🔁 Retries until the prompt fits within your token limits.
    • ⚙️ Configurable parameters: token range, model, retry attempts, etc.

    🧩 How It Works

    1. You input an idea (e.g., "cyberpunk alley in heavy rain").
    2. You can add avoid terms (e.g., "blur, soft lighting, extra limbs").
    3. The LLM uses both to generate a positive prompt:
      • The idea is expanded into a structured visual prompt.
      • The avoid terms are used to steer the generation away from unwanted content.
    4. The avoid list is also passed through untouched as the negative prompt.
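    The generate-and-retry flow above can be sketched as follows. This is a minimal illustration, not the extension's actual source: `generate_fn` stands in for the call to the local Ollama model, and `count_tokens` is a whitespace approximation standing in for the clip-vit-base-patch32 tokenizer, so only the control flow is shown.

```python
def count_tokens(text: str) -> int:
    # Stand-in for the CLIP tokenizer: one "token" per whitespace-separated word.
    return len(text.split())

def generate_prompt(idea, avoid, generate_fn,
                    min_tokens=20, max_tokens=75, max_attempts=3):
    """Ask the LLM for a positive prompt until it fits the token range;
    the avoid list is returned untouched as the negative prompt."""
    instruction = (
        f"Expand this idea into a detailed image prompt: {idea}. "
        f"Do not mention: {avoid}."
    )
    prompt = ""
    for _ in range(max_attempts):
        prompt = generate_fn(instruction)
        if min_tokens <= count_tokens(prompt) <= max_tokens:
            break  # prompt fits the configured token range; stop retrying
    return prompt, avoid  # avoid passes through unchanged
```

    With a real backend, `generate_fn` would POST to the local Ollama HTTP API; the retry loop itself is independent of how the text is produced.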

    📤 Outputs

    | Output | Type   | Description                                |
    |--------|--------|--------------------------------------------|
    | prompt | STRING | The positive prompt, generated by the LLM. |
    | avoid  | STRING | The negative prompt, returned as provided. |


    🧪 Example

    Inputs:

    • idea: haunted subway station with broken lights
    • avoid: blood, gore, screaming

    Outputs:

    • prompt: (LLM-generated)
      "dark abandoned subway, flickering fluorescent lights, cracked tiled walls, shadowy corners, old train cars, graffiti-covered pillars, dim green glow, debris scattered floor"

    • avoid: (Unchanged)
      "blood, gore, screaming"

    Use the prompt as your positive CLIP text, and avoid for negative conditioning.


    โš™๏ธ Configuration Fields

    | Name              | Type           | Description                                                          |
    |-------------------|----------------|----------------------------------------------------------------------|
    | model             | Dropdown       | Select the Ollama model to use.                                      |
    | idea              | Multiline Text | The concept or image idea.                                           |
    | avoid             | Multiline Text | Words/themes to avoid (used by the LLM + passed to negative prompt). |
    | max_tokens        | Int            | Maximum allowed tokens for the generated prompt.                     |
    | min_tokens        | Int            | Minimum token target.                                                |
    | max_attempts      | Int            | Max retries to hit the token range.                                  |
    | regen_on_each_use | Bool           | Force prompt regeneration every time the node runs.                  |
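    In ComfyUI, fields like these are declared in the node's `INPUT_TYPES` classmethod. The sketch below shows how the table could map onto that convention; field names follow the table, but the class name and the model list are hypothetical (the real node populates the dropdown from the locally installed Ollama models).

```python
class NeonLLamaNode:
    """Illustrative ComfyUI node skeleton, not the extension's actual source."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # A list as the first tuple element renders as a dropdown;
                # these model names are placeholders.
                "model": (["llama3", "mistral"],),
                "idea": ("STRING", {"multiline": True}),
                "avoid": ("STRING", {"multiline": True}),
                "max_tokens": ("INT", {"default": 75, "min": 1}),
                "min_tokens": ("INT", {"default": 20, "min": 1}),
                "max_attempts": ("INT", {"default": 3, "min": 1}),
                "regen_on_each_use": ("BOOLEAN", {"default": False}),
            }
        }

    RETURN_TYPES = ("STRING", "STRING")
    RETURN_NAMES = ("prompt", "avoid")
    FUNCTION = "run"
    CATEGORY = "text"
```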


    🔄 Regeneration Behavior

    If enabled, regen_on_each_use will force the node to re-generate the prompt every time it's executed, ensuring fresh output even if the inputs don't change.
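    ComfyUI caches a node's output and re-runs it only when its inputs (or the value returned by an optional `IS_CHANGED` classmethod) change. A common way to force re-execution is to return `NaN`, which never compares equal to itself. A hedged sketch of how `regen_on_each_use` could hook into this (the class name and exact gating are illustrative, not taken from the extension's source):

```python
class NeonLLamaNode:
    @classmethod
    def IS_CHANGED(cls, regen_on_each_use=False, **kwargs):
        if regen_on_each_use:
            return float("nan")  # NaN != NaN, so the cached result never matches
        return ""  # stable value: ComfyUI may reuse the cached output
```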


    📄 License

    MIT License