Easy prompting for generating endless random art pieces and photographs!
This plugin extends ComfyUI with advanced prompt generation capabilities and image analysis using GPT-4 Vision. It includes the following components:
- A versatile prompt generator for text-to-image AI systems.
- An image analyzer built on OpenAI's GPT-4 Vision model, including a toggle to create movie posters (added 08/04/24).
- A text generator that uses OpenAI's GPT-4 model, driven by input text.
- A text generator that uses a custom Ollama model, driven by input text.
- A latent generator that produces latent representations for use in Stable Diffusion 3 pipelines.
These classes can be integrated into ComfyUI workflows to enhance prompt generation, image analysis, and latent space manipulation for advanced AI image generation pipelines.
The `APNextNode` is a custom node class designed for processing and enhancing prompts with additional contextual information. It is particularly useful for generating creative content by incorporating random elements from predefined categories.
The function supports multiple categories, which are loaded from JSON files in a specific directory structure (described below).
Each category can contain its own set of items and attributes, which are used to enhance the input prompt.
The `APNextNode` class is designed to be used within a larger node-based content generation pipeline such as ComfyUI. It processes input prompts and optional category selections to produce an enhanced prompt and a random output.
Required:

- `prompt`: A multiline string input for the base prompt
- `separator`: A string to separate added elements (default: ",")

Optional:

- `string`: An additional string input (default: "")
- `seed`: An integer seed for random operations (default: 0)
- `attributes`: A boolean to toggle attribute inclusion (default: False)

The function returns two strings: the enhanced prompt and the random output.
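For orientation, here is a minimal sketch of how these inputs and outputs could map onto ComfyUI's standard node interface. It is illustrative only: the method name `process` and the `CATEGORY` value are assumptions, and the real class also injects one optional dropdown per detected category folder (described later).

```python
class APNextNode:
    @classmethod
    def INPUT_TYPES(cls):
        # Sketch of the interface described above, not the actual implementation.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "separator": ("STRING", {"default": ","}),
            },
            "optional": {
                "string": ("STRING", {"default": ""}),
                "seed": ("INT", {"default": 0}),
                "attributes": ("BOOLEAN", {"default": False}),
                # One optional dropdown per detected category folder would be added here.
            },
        }

    RETURN_TYPES = ("STRING", "STRING")  # enhanced prompt, random output
    FUNCTION = "process"                 # assumed method name
    CATEGORY = "prompt"                  # assumed category
```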
The function expects a specific file structure for category data:
```
data/
└── next/
    └── [CATEGORY_NAME]/
        └── [field_name].json
```
Each JSON file should contain either an array of items or a dictionary with "items", "preprompt", "separator", "endprompt", and "attributes" keys.
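As a rough, hypothetical sketch (not the plugin's actual code), a loader for this layout could read a field file and normalize both accepted shapes into the detailed dictionary form:

```python
import json
import os

def load_field(category, field, base="data/next"):
    """Hypothetical loader: reads data/next/<category>/<field>.json
    and normalizes it to the detailed dictionary structure."""
    path = os.path.join(base, category, f"{field}.json")
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, list):
        # Simple array form: wrap it in the detailed form.
        data = {"items": data}
    # Fill in defaults for the optional keys.
    data.setdefault("preprompt", "")
    data.setdefault("separator", ", ")
    data.setdefault("endprompt", "")
    data.setdefault("attributes", {})
    return data
```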
This README provides an overview of the `APNextNode` function based on the given code snippet. For full implementation details and integration instructions, please refer to the complete source code and any additional documentation provided with the system where this node is used.
The `APNextNode` function is designed to be flexible and allow users to add their own categories and fields. This guide explains how to do this and how to structure the JSON files for new categories.
Create a new folder in the `data/next/` directory. The folder name should be lowercase and represent your new category (e.g., `data/next/mycategory/`).
Inside this new folder, create one or more JSON files. Each JSON file represents a field within your category. The file name (without the .json extension) will be used as the field name in the `APNextNode` function.
The JSON file for each field can have two different structures:
A simple array of items:
```json
[
  "item1",
  "item2",
  "item3"
]
```
A more detailed structure with additional properties:
```json
{
  "preprompt": "Optional text to appear before the selected items",
  "separator": ", ",
  "endprompt": "Optional text to appear after the selected items",
  "items": [
    "item1",
    "item2",
    "item3"
  ],
  "attributes": {
    "item1": ["attribute1", "attribute2"],
    "item2": ["attribute3", "attribute4"]
  }
}
```
- `preprompt`: (Optional) Text that appears before the selected items.
- `separator`: (Optional) String used to separate multiple selected items. Default is ", ".
- `endprompt`: (Optional) Text that appears after the selected items.
- `items`: (Required) Array of items that can be selected for this field.
- `attributes`: (Optional) Object where keys are item names and values are arrays of attributes for that item.

Example: `data/next/visual_effects/effects.json`:

```json
{
  "preprompt": "with",
  "separator": " and ",
  "endprompt": "visual effects",
  "items": [
    "motion blur",
    "lens flare",
    "particle effects",
    "color grading",
    "depth of field"
  ],
  "attributes": {
    "motion blur": ["dynamic", "speed-enhancing", "cinematic"],
    "lens flare": ["bright", "atmospheric", "sci-fi-inspired"],
    "particle effects": ["intricate", "flowing", "ethereal"],
    "color grading": ["vibrant", "mood-setting", "stylized"],
    "depth of field": ["focused", "bokeh-rich", "photorealistic"]
  }
}
```
After adding your new category and JSON file(s), the `APNextNode` function will automatically detect and include it as an optional input. Users can then select items from your new category when using the function for image generation prompts.
For example, using the "Visual Effects" category we just created, a base prompt such as "a futuristic cityscape" could become "a futuristic cityscape, with motion blur and lens flare visual effects".
Remember that the `APNextNode` function handles the random selection and formatting based on the JSON structure you provide. This can greatly enhance the variety and specificity of prompts for AI image generation.
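To make that step concrete, below is a hedged Python sketch of what the selection and formatting could look like. The function `enhance`, its `count` parameter, and the exact placement of attributes are hypothetical, not the plugin's documented API.

```python
import random

def enhance(prompt, field_data, separator=",", count=2, seed=0, use_attributes=False):
    """Hypothetical sketch of the selection/formatting step.
    How many items are picked (count) and where attributes are placed
    are assumptions, not the plugin's documented behavior."""
    rng = random.Random(seed)
    items = field_data["items"]
    picks = rng.sample(items, k=min(count, len(items)))
    if use_attributes:
        attrs = field_data.get("attributes", {})
        # Prepend one random attribute to each picked item that has any.
        picks = [f"{rng.choice(attrs[p])} {p}" if attrs.get(p) else p for p in picks]
    fragment = " ".join(
        part
        for part in (
            field_data.get("preprompt", ""),
            field_data.get("separator", ", ").join(picks),
            field_data.get("endprompt", ""),
        )
        if part
    )
    return f"{prompt}{separator} {fragment}"

# e.g. enhance("a futuristic cityscape", effects_data, seed=1)
# -> something like: "a futuristic cityscape, with lens flare and motion blur visual effects"
```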
This new family of nodes for ComfyUI offers extensive flexibility and capabilities for prompt engineering and image generation workflows.
The system includes numerous nodes that can be chained together to create complex workflows:
- Enhance prompts using the GPT-4 node
- Utilize local language models with the Ollama node
- Create prompts based on images using various vision models
- Automatically incorporate LORA tokens using pre-defined prompts
- Generate completely random prompts without the need for external language models
To use the GPT-4 nodes, set your OpenAI API key as an environment variable before launching ComfyUI. On Windows:

```
set OPENAI_API_KEY=sk-your-api-key-here
```
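On Linux/macOS, the equivalent is:

```
export OPENAI_API_KEY=sk-your-api-key-here
```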
Add your own custom folders with custom properties within `comfyui_dagthomas/data/next`. These will be loaded in ComfyUI alongside the other nodes.
This project is currently in beta. Detailed documentation is in progress. Explore the various nodes and their capabilities to unlock the full potential of this ComfyUI extension.