# ComfyUI nodes for (V)LLM Structured Outputs

ComfyUI nodes for LLM/VLM structured outputs, with integration for prompting.
Generate structured data from text and images, then use that data to create dynamic prompts or make decisions in your workflows.
E.g. you might use a vision LLM to analyze an image and extract descriptions of the foreground, background, and text. Then, using these variables in a format string, you can build a custom prompt such as:
```
A vector art drawing of {foreground}, set against a minimalistic light {background}.
"{text}" in the top right corner is drawn in wide {color} brushstrokes.
```
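Conceptually, filling such a template is plain placeholder substitution. A minimal sketch (not the nodes' actual implementation — the variable values below are hypothetical examples of what a vision LLM might extract):

```python
# Template with placeholders matching the structured-output variable names.
template = (
    "A vector art drawing of {foreground}, set against a minimalistic "
    'light {background}. "{text}" in the top right corner is drawn in '
    "wide {color} brushstrokes."
)

# Hypothetical values, as a vision LLM might extract them from an image.
variables = {
    "foreground": "a red fox",
    "background": "beige backdrop",
    "text": "Hello",
    "color": "orange",
}

# Substitute each placeholder with its extracted value.
prompt = template.format(**variables)
print(prompt)
```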
## Installation

Install `comfyui-structured-outputs` via the ComfyUI custom nodes manager.

Copy `.env.example` to `.env` and add your OpenAI API key.

## Attribute Node

The Attribute Node represents a single variable in your structured output. Use it to:
- Name the variable (e.g. `foreground`).
- Set its type: `string`, `int`, or `bool`.

You can chain multiple Attribute Nodes together to form a complete structured output, or use a single node if only one value is needed.
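Conceptually, each Attribute Node contributes one typed field, and chaining accumulates them into a single schema. A rough, hypothetical sketch of that accumulation (function and variable names are illustrative, not taken from the node source):

```python
# Map the node's attribute types to JSON-schema type names.
TYPE_MAP = {"string": "string", "int": "integer", "bool": "boolean"}

def add_attribute(schema, name, attr_type):
    """Add one attribute (name + type) to an accumulating object schema."""
    if schema is None:
        schema = {"type": "object", "properties": {}, "required": []}
    schema["properties"][name] = {"type": TYPE_MAP[attr_type]}
    schema["required"].append(name)
    return schema

# Chain three attributes, as three connected Attribute Nodes would.
schema = None
for name, attr_type in [
    ("foreground", "string"),
    ("background", "string"),
    ("text", "string"),
]:
    schema = add_attribute(schema, name, attr_type)
```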
## Structured Output Node

The Structured Output Node generates structured data by making an LLM call. Connect your Attribute Nodes to define the schema, then supply the text and/or image input to analyze.

The output is a set of named variables produced by the LLM.
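Under the hood, a request of this kind can be expressed with OpenAI's JSON-schema structured outputs. A hedged sketch of the request payload such a node might build (the model name, prompt, and field names are illustrative assumptions, not the node's actual internals):

```python
import json

# Schema assembled from the connected Attribute Nodes (illustrative).
schema = {
    "type": "object",
    "properties": {
        "foreground": {"type": "string"},
        "background": {"type": "string"},
    },
    "required": ["foreground", "background"],
    "additionalProperties": False,
}

# Request body for OpenAI's chat completions API with structured outputs.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": "Describe the foreground and background of this image.",
        }
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "image_attributes", "schema": schema, "strict": True},
    },
}

print(json.dumps(payload, indent=2))
```

The model's reply is then a JSON object conforming to the schema, whose fields become the node's named output variables.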
## Attribute to Text Node

The Attribute to Text Node converts the structured output into formatted text. To use it, set a format string containing variable placeholders (e.g. `The sky is {sky_color}`).

This node outputs a formatted text prompt based on the structured variables.
## Contributing

All contributions, bug reports, issues, and requests are welcome!
## API Key

To use the ComfyUI LLM Structured Output Nodes, you will need an OpenAI API key. You can get one by signing up at OpenAI.

Copy your key into a `.env` file in the project root (see `.env.example`):

```
OPENAI_KEY="sk-your-key-here"
```
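For illustration, a key stored this way can be read with a few lines of standard-library Python; this is a minimal sketch of `.env` parsing, assuming simple `KEY="value"` lines (the project itself may use a dedicated dotenv library instead):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY=value lines from a .env file into a dict."""
    values = {}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks and comments; split on the first '='.
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    values[key.strip()] = value.strip().strip('"')
    return values

# Prefer the .env file, falling back to the process environment.
env = load_env()
api_key = env.get("OPENAI_KEY") or os.environ.get("OPENAI_KEY")
```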