A prompt generator and CLIP encoder using AI provided by Ollama.
Install Ollama and have the service running.
This node has been tested with Ollama version 0.4.6.
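If you want to confirm the service is up before wiring the node into a workflow, you can query Ollama's version endpoint directly. This is just a convenience check in Python, assuming the default service address:

```python
import json
import urllib.request

# Quick check that the Ollama service is reachable.
# The URL here is the default; adjust it if your service runs elsewhere.
OLLAMA_URL = "http://localhost:11434"

with urllib.request.urlopen(f"{OLLAMA_URL}/api/version") as resp:
    version = json.load(resp)["version"]

print(f"Ollama is running, version {version}")
```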
Choose one of the following methods to install the node:
If you have the ComfyUI Manager installed, you can install the node from the Install Custom Nodes menu. Search for `Ollama Prompt Encode` and click `Install`.
If you have the Comfy CLI installed, you can install the node from the command line.
comfy node registry-install comfyui-ollama-prompt-encode
The registry listing for this node can be found at [registry.comfy.org](https://registry.comfy.org/publishers/michaelstanden/nodes/comfyui-ollama-prompt-encode).
Clone this repository into your `<comfyui>/custom_nodes` directory.
cd <comfyui>/custom_nodes
git clone https://github.com/ScreamingHawk/comfyui-ollama-prompt-encode
The `Ollama CLIP Prompt Encode` node is designed to replace the default `CLIP Text Encode (Prompt)` node. It generates a prompt using the Ollama AI model and then encodes the prompt with CLIP.

The node also outputs the generated prompt as a string. This can be viewed with rgthree's `Display Any` node.

An example workflow is available in the `docs` folder.
The URL to the Ollama service. The default is `http://localhost:11434`.
This is the model that is used to generate your prompt.
Some models that work well with this prompt generator are:
- `orca-mini`
- `mistral`
- `tinyllama`
The node will automatically download the model if it is not already present on your system.
Smaller models are recommended for faster generation times.
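The download can also be triggered ahead of time so the node's first run doesn't wait on it. Here is a minimal sketch using Ollama's pull API, assuming the default service address and one of the models suggested above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # assumed default service address

# Ask Ollama to download the model if it isn't already present.
payload = {"model": "orca-mini", "stream": False}
req = urllib.request.Request(
    f"{OLLAMA_URL}/api/pull",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["status"])  # "success" once the model is available
```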
The seed that will be used to generate the prompt. This is useful for generating the same prompt multiple times or ensuring a different prompt is generated each time.
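Putting the URL, model, and seed inputs together, here is a rough sketch of the kind of request the node could make to Ollama's generate API. It is written directly against the HTTP API for illustration, not copied from the node's source, and the prompt wording is made up:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # the node's URL input
MODEL = "orca-mini"                    # the node's model input

# Fixing the seed in "options" makes the same request return the same text,
# which is what re-using the node's seed input achieves.
payload = {
    "model": MODEL,
    "prompt": "Describe an image of a castle at sunset for an image generator.",
    "stream": False,
    "options": {"seed": 42},
}
req = urllib.request.Request(
    f"{OLLAMA_URL}/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```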
A string that will be prepended to the generated prompt. This is useful for models like pony that work best with extra tags like `score_9, score_8_up`.
The text that will be used by the AI model to generate the prompt.
If checked, the node will generate a prompt made up of many tags separated by commas, e.g. `young girl, photorealistic, blue hair`. This is better for models that work best with tag-style prompts, like pony.

If unchecked, the node will generate a more descriptive prompt, e.g. `A photorealistic image of a young girl with blue hair`. This is better for models that work best with descriptive prompts, like Flux.
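To make the difference concrete, here is a hypothetical sketch of how the instruction sent to the language model might change with this toggle. The wording below is illustrative only; the node's actual instruction text may differ:

```python
def build_instruction(text: str, comma_separated_tags: bool) -> str:
    """Build the instruction given to the LLM (illustrative wording only)."""
    if comma_separated_tags:
        return (
            "Generate a comma-separated list of short descriptive tags "
            f"for an image of: {text}"
        )
    return f"Write one detailed, descriptive sentence for an image of: {text}"


# Example usage with the two modes described above.
print(build_instruction("a young girl with blue hair", comma_separated_tags=True))
print(build_instruction("a young girl with blue hair", comma_separated_tags=False))
```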
Run the tests with:
python -m unittest
This software is provided under the MIT License, so it's free to use as long as you give me credit.