Nodes: Ollama, Green Screen to Transparency, Save image for Bjornulf LobeChat, Text with random Seed, Random line from input, Combine images (Background+Overlay alpha), Image to grayscale (black & white), Remove image Transparency (alpha), Resize Image, ...
A list of 110 custom nodes for ComfyUI: display, manipulate, create and edit text, images, videos, loras, generate characters and more.
You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows, and even work with external AI tools like Ollama or Text To Speech.
Support me and my work : β€οΈβ€οΈβ€οΈ https://ko-fi.com/bjornulf β€οΈβ€οΈβ€οΈ
1 - π Text/Chat AI generation : Bjornulf Lobe Chat Fork
2 - π Speech AI generation : Bjornulf Text To Speech
<u>3 - π¨ Image AI generation : Bjornulf Comfyui custom nodes (you are here)</u>
1. π Show (Text, Int, Float)
49. πΉπ Video Preview
68. π’ Add line numbers
71. π Show (Int)
72. π Show (Float)
73. π Show (String/Text)
74. π Show (JSON)
2. β Write Text
3. βπ Advanced Write Text (+ π² random selection and π°οΈ variables)
4. π Combine Texts
15. πΎ Save Text
26. π² Random line from input
28. π’π² Text with random Seed
32. π§π Character Description Generator
48. ππ² Text scrambler (π§ Character)
67. πββ¨ Text to Anything
68. β¨βπ Anything to Text
75. πβπ Replace text
81. π₯π Text Generator ππ₯
82. π©βπ¦°π Text Generator (Character Female)
83. π¨βπ¦°π Text Generator (Character Male)
84. πΎπ Text Generator (Character Creature)
85. ππΊπ Text Generator (Character Pose)
86. π§π¨βπ§π Text Generator (Object for Character)
87. ππ Text Generator (Scene)
88. π¨π Text Generator (Style)
89. π Text Generator (Outfit Female)
90. π Text Generator (Outfit Male)
91. β»π₯π List Looper (Text Generator)
92. β»ππ List Looper (Text Generator Scenes)
93. β»π¨π List Looper (Text Generator Styles)
94. β»ππΊπ List Looper (Text Generator Poses)
95. β»π¨βπ¦°π©βπ¦°πΎ List Looper (Text Generator Characters)
96. β»π List Looper (Text Generator Outfits Male)
97. β»π List Looper (Text Generator Outfits Female)
6. β» Loop
7. β» Loop Texts
8. β» Loop Integer
9. β» Loop Float
10. β» Loop All Samplers
11. β» Loop All Schedulers
12. β» Loop Combos
27. β» Loop (All Lines from input)
33. β» Loop (All Lines from input π combine by lines)
38. β»πΌ Loop (Images)
39. β» Loop (βπ Advanced Write Text + π°οΈ variables)
42. β» Loop (Model+Clip+Vae) - aka Checkpoint / Model
53. β» Loop Load checkpoint (Model Selector)
54. β»π Loop Lora Selector
56. β»π Loop Sequential (Integer)
57. β»π Loop Sequential (input Lines)
90. β»π₯π List Looper (Text Generator)
91. β»ππ List Looper (Text Generator Scenes)
92. β»π¨π List Looper (Text Generator Styles)
93. β»ππΊπ List Looper (Text Generator Poses)
94. β»π¨βπ¦°π©βπ¦°π List Looper (Text Generator Characters)
95. β»π List Looper (Text Generator Outfits Male)
96. β»π List Looper (Text Generator Outfits Female)
3. βπ Advanced Write Text (+ π² random selection and π°οΈ variables)
5. π² Random (Texts)
26. π² Random line from input
28. π’π² Text with random Seed
37. π²πΌ Random Image
40. π² Random (Model+Clip+Vae) - aka Checkpoint / Model
41. π² Random Load checkpoint (Model Selector)
48. ππ² Text scrambler (π§ Character)
55. π²π Random Lora Selector
16. πΎπΌπ¬ Save image for Bjornulf LobeChat
17. πΎπΌ Save image as tmp_api.png (Temporary API)
18. πΎπΌπ Save image to a chosen folder name
14. πΎπΌ Save Exact name
29. π₯πΌ Load Image with Transparency β’
43. π₯πΌπ Load Images from output folder
13. π Resize Image
22. π² Remove image Transparency (alpha)
23. π² Image to grayscale (black & white)
24. πΌ+πΌ Stack two images (Background + Overlay)
25. π©ββ’ Green Screen to Transparency
29. β¬οΈπΌ Load Image with Transparency β’
30. πΌβ Cut image with a mask
37. π²πΌ Random Image
38. β»πΌ Loop (Images)
43. β¬οΈππΌ Load Images from output folder
44. πΌπ Select an Image, Pick
46. πΌπ Image Details
47. πΌ Combine Images
60. πΌπΌ Merge Images/Videos πΉπΉ (Horizontally)
61. πΌπΌ Merge Images/Videos πΉπΉ (Vertically)
62. π¦π Ollama Vision
70. π Resize Image Percentage
80. π©· Empty Latent Selector
40. π² Random (Model+Clip+Vae) - aka Checkpoint / Model
41. π² Random Load checkpoint (Model Selector)
42. β» Loop (Model+Clip+Vae) - aka Checkpoint / Model
53. β» Loop Load checkpoint (Model Selector)
54. β» Loop Lora Selector
55. π² Random Lora Selector
106. βπ¨ API Image Generator (FalAI) β
107. βπ¨ API Image Generator (CivitAI) β
108. βπ Add Lora (API ONLY - CivitAI) πβ
109. βπ¨ API Image Generator (Black Forest Labs - Flux) β
110. βπ¨ API Image Generator (Stability - Stable Diffusion) β
98. π₯ Load checkpoint SD1.5 (+Download from CivitAi)
99. π₯ Load checkpoint SDXL (+Download from CivitAi)
100. π₯ Load checkpoint Pony (+Download from CivitAi)
101. π₯ Load checkpoint FLUX Dev (+Download from CivitAi)
102. π₯ Load checkpoint FLUX Schnell (+Download from CivitAi)
103. π₯π Load Lora SD1.5 (+Download from CivitAi)
104. π₯π Load Lora SDXL (+Download from CivitAi)
105. π₯π Load Lora Pony (+Download from CivitAi)
20. πΉ Video Ping Pong
21. πΉ Images to Video (FFmpeg)
49. πΉπ Video Preview
50. πΌβπΉ Images to Video path (tmp video)
51. πΉβπΌ Video Path to Images
52. ππΉ Audio Video Sync
58. πΉπ Concat Videos
59. πΉπ Combine Video + Audio
60. πΌπΌ Merge Images/Videos πΉπΉ (Horizontally)
61. πΌπΌ Merge Images/Videos πΉπΉ (Vertically)
76. βπΉ FFmpeg Configuration πΉβ
77. πΉπ Video details β
78. πΉβπΉ Convert Video
79. πΉπ Concat Videos from list
19. π¦π¬ Ollama Talk
62. π¦π Ollama Vision
63. π¦ Ollama Configuration β
64. π¦ Ollama Job Selector πΌ
65. π¦ Ollama Persona Selector π§
31. πβπ TTS - Text to Speech
66. πβπ STT - Speech to Text
31. πβπ TTS - Text to Speech
52. ππΉ Audio Video Sync
59. πΉπ Combine Video + Audio
66. πβπ STT - Speech to Text
35. βΈοΈ Paused. Resume or Stop, Pick π
36. βΈοΈ Paused. Select input, Pick π
45. π If-Else (input / compare_with)
ComfyUI is great for local usage, but I sometimes need more power than what I have...
I have a computer with a 4070 Super with 12GB of VRAM, and a simple Flux fp8 workflow takes about ~40 seconds. With a 4090 in the cloud I can run Flux fp16 in ~12 seconds. (There are of course also some workflows that I can't even run locally.)
My referral link for Runpod: https://runpod.io?ref=tkowk7g5 (If you use it, I get a commission at no extra cost to you.)
If you want to use my nodes and ComfyUI in the cloud (and be able to install more stuff), I maintain an optimized, ready-to-use template on Runpod: https://runpod.io/console/deploy?template=r32dtr35u1&ref=tkowk7g5
The template is named bjornulf-comfyui-allin-workspace and can be operational in ~3 minutes. (Depending on your pod, setup and the download of extra models or anything else not included.)
You need to create and select a network volume before using it; the size is up to you. I have 50GB of storage because I only use the cloud for Flux or lora training on a 4090. (~$0.7/hour)
β οΈ When the pod is ready, you need to open a terminal in the browser (after clicking on Connect from your pod) and launch ComfyUI manually with: cd /workspace/ComfyUI && python main.py --listen 0.0.0.0 --port 3000 or the alias start_comfy
(It's much better to control it with a terminal, check logs, etc...)
After that you can just click on the Connect to port 3000 button.
As a file manager, you can use the included JupyterLab on port 8888.
If you have any issues with it, please let me know.
It keeps everything in Runpod network storage (/workspace/ComfyUI), so you can stop and start the cloud GPU without losing anything, change GPU, or whatever.
Zone: I recommend EU-RO-1, but it's up to you.
Top up your Runpod account with a minimum of $10 to start.
β οΈ Warning: you pay by the minute, so this is not recommended for testing or learning ComfyUI. Do that locally!
Run a cloud GPU only when you already have your workflow ready to run.
Advice: take a cheap GPU for testing, downloading models, or setting things up.
To download a checkpoint or anything else, you need to use the terminal.
For downloading from Huggingface (get a token here: https://huggingface.co/settings/tokens), here is an example covering everything you need for Flux dev:
huggingface-cli login --token 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
huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors --local-dir /workspace/ComfyUI/models/unet
huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --local-dir /workspace/ComfyUI/models/clip
huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir /workspace/ComfyUI/models/clip
huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors --local-dir /workspace/ComfyUI/models/vae
To use Flux you can just drag and drop into your browser's ComfyUI interface the .json from my github repo: workflows/FLUX_dev_troll.json, direct link: https://github.com/justUmen/ComfyUI-BjornulfNodes/blob/main/workflows/FLUX_dev_troll.json
For downloading from CivitAI (get a token here: https://civitai.com/user/account), just copy/paste the link of the checkpoint you want to download and use something like this, with your token in the URL:
CIVITAI="8b275fada679ba5812b3da2bf35016f6"
wget --content-disposition -P /workspace/ComfyUI/models/checkpoints "https://civitai.com/api/download/models/272376?type=Model&format=SafeTensor&size=pruned&fp=fp16&token=$CIVITAI"
If you want to download, for example, the entire output folder, you can just compress it first:
cd /workspace/ComfyUI/output && tar -czvf /workspace/output.tar.gz .
Then you can download it from the JupyterLab file manager.
If you have any issues with this template from Runpod, please let me know, I'm here to help. π
First you need to find the python.exe inside the python_embeded folder, then you can right-click or shift + right-click inside that folder in your file manager to open a terminal there.
This is where I have it, with the command you need:
H:\ComfyUI_windows_portable\python_embeded> .\python.exe -m pip install pydub ollama opencv-python
Whenever you have to install something, you can reuse the same command with the dependency you want:
.\python.exe -m pip install whateveryouwant
You can then run ComfyUI.
Dependencies:
- pip install ollama (you can also install the Ollama backend itself if you want: https://ollama.com/download. You don't need the backend unless you want to use my Ollama nodes, BUT you do need to run pip install ollama.)
- pip install pydub (for the TTS node)
- pip install opencv-python
If you want to use a Python virtual environment only for ComfyUI, which I recommend, you can do it like this for example (this also pre-installs pip):
sudo apt-get install python3-venv python3-pip
python3 -m venv /the/path/you/want/venv/bjornulf_comfyui
Once you have your environment in this new folder, you can activate it and install the dependencies inside:
source /the/path/you/want/venv/bjornulf_comfyui/bin/activate
pip install ollama pydub opencv-python
Then you can start ComfyUI with this environment (note that you need to re-activate it each time you want to launch ComfyUI):
cd /where/you/installed/ComfyUI && python main.py
- ollama_ip.txt in the comfyui custom nodes folder.
- Minor changes, add details/updates to README.
- scrambler/scrambler_character.json in the comfyui custom nodes folder.
Description:
The show node only displays text, or a list of several texts. (Read-only node.)
Three types are managed: green is for the STRING type, orange for FLOAT and blue for INT. I put colors so I/you don't try to edit them. π€£
Update 0.61: You now also have 4 other nodes to display format-specific values: INT, FLOAT, STRING and JSON (STRING).
These are convenient because they are automatically suggested on drag and drop.
Description:
Simple node to write text.
Description:
The Advanced Write Text node allows a special syntax for random variants: {hood|helmet} will randomly choose between hood and helmet.
You also have seed and control_after_generate to manage the randomness.
It also displays the text in the ComfyUI console. (Useful for debugging.)
Example of console logs :
Raw text: photo of a {green|blue|red|orange|yellow} {cat|rat|house}
Picked text: photo of a green house
You can also create and reuse variables with this syntax: <name>.
Usage example :
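As an illustration, here is a minimal Python sketch of how the {a|b|c} syntax can be resolved with a seed. The function name and implementation are mine, not the node's actual code:

```python
import random
import re

def pick_variants(text: str, seed: int) -> str:
    """Resolve each {a|b|c} block by picking one option, seeded for reproducibility."""
    rng = random.Random(seed)
    pattern = re.compile(r"\{([^{}]*)\}")
    while pattern.search(text):
        # replace one block at a time, left to right
        text = pattern.sub(lambda m: rng.choice(m.group(1).split("|")), text, count=1)
    return text

print(pick_variants("photo of a {green|blue|red} {cat|rat|house}", seed=42))
```

With the same seed, the same variant is picked every time, which matches the seed / control_after_generate behavior described above.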
Description:
Combine multiple text inputs into a single output. (Separation can be a comma, space, new line or nothing.)
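Conceptually this is just a join with a configurable separator; a tiny sketch (separator names here are illustrative, not the node's exact widget values):

```python
SEPARATORS = {"comma": ", ", "space": " ", "new_line": "\n", "nothing": ""}

def combine_texts(texts, separation="comma"):
    """Join several text inputs with the chosen separator."""
    return SEPARATORS[separation].join(texts)

print(combine_texts(["a red cat", "studio lighting"]))  # a red cat, studio lighting
```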
Description:
Generate and display random text from a predefined list. Great for creating random prompts.
You also have control_after_generate to manage the randomness.
Description:
General-purpose loop node; you can connect it in between anything.
It has an optional input; if no input is given, it will loop over the value of the STRING "if_no_input" (which you can edit).
β Careful: this node accepts everything as input and output, so you can use it with texts, integers, images, masks, segs, etc... but be consistent with your inputs/outputs.
Do not use this Loop if you can do otherwise.
This is an example together with my node 28, to force a different seed for each iteration :
Description:
Cycle through a list of text inputs.
Here is an example of usage with combine texts and flux :
Description:
Iterate through a range of integer values, good for steps in a ksampler, etc...
β Don't forget that you can convert ksampler widgets to input by right-clicking the ksampler node :
Here is an example of usage with a ksampler (notice that with "steps" this node isn't optimized, but it's good enough for quick testing):
Description:
Loop through a range of floating-point numbers, good for cfg, denoise, etc...
Here is an example with controlnet, trying to make a red cat based on a blue rabbit :
Description:
Iterate over all available samplers to apply them one by one. Ideal for testing.
Here is an example of looping over all the samplers with the normal scheduler :
Description:
Iterate over all available schedulers to apply them one by one. Ideal for testing. (same idea as sampler above, but for schedulers)
Description:
Generate a loop from a list of my own custom combinations (scheduler+sampler), or select one combo manually.
Good for testing.
Example of usage to see the differences between different combinations :
Description:
Resize an image to exact dimensions. The Save Exact name node will save the image to the exact path you give it.
β οΈπ£ Warning: the image will be overwritten if it already exists.
Description:
Save the given text input to a file. Useful for logging and storing text data.
If the file already exists, the text will be appended at the end of the file.
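The append behavior can be sketched in a few lines of Python (function name is mine, not the node's code):

```python
from pathlib import Path

def save_text(text: str, path: str) -> None:
    """Append text to the file, creating parent folders if needed."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    # "a" mode appends instead of overwriting, matching the node's behavior
    with p.open("a", encoding="utf-8") as f:
        f.write(text + "\n")
```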
Description:
β I made this node for my custom lobe-chat to send+receive images through the ComfyUI API: lobe-chat
It will save the image in the folder output/BJORNULF_LOBECHAT/.
The names will start at api_00001.png, then api_00002.png, etc...
It will also create a link to the last generated image at the location output/BJORNULF_API_LAST_IMAGE.png.
This link is used by my custom lobe-chat to copy the image inside the lobe-chat project.
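The incrementing naming scheme described above can be sketched like this (a hypothetical helper, not the node's actual code):

```python
import os

def next_api_name(folder: str) -> str:
    """Return the next free api_XXXXX.png name in the folder."""
    os.makedirs(folder, exist_ok=True)
    numbers = [int(f[4:9]) for f in os.listdir(folder)
               if f.startswith("api_") and f.endswith(".png") and f[4:9].isdigit()]
    # first file is api_00001.png, then api_00002.png, etc.
    return os.path.join(folder, f"api_{max(numbers, default=0) + 1:05d}.png")
```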
Save image as tmp_api.png (Temporary API) β οΈπ£
Description:
Save an image for short-term use: ./output/tmp_api.png β οΈπ£
Description:
Save images in a specific folder: my_folder/00001.png, my_folder/00002.png, etc...
Multiple nested folders are also allowed, for example: animal/dog/small.
Description:
Use Ollama inside ComfyUI. (Requires the Ollama backend to be installed and currently running.)
By default it uses the model llama3.2:3b and the URL http://0.0.0.0:11434. (For custom configuration, use node 63.)
Example of basic usage:
Example of usage with context. Notice that with context you can follow up a conversation: "there" is clearly understood as "Bucharest":
You can also set use_context_file to True; this will save the context in a file: ComfyUI/Bjornulf/ollama_context.txt.
This way you can keep using the context without having many nodes connected to each other; just run the same workflow several times.
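The context-file idea boils down to appending each exchange to a text file and reading it back on the next run. A minimal sketch (only the file path comes from this README; the helper names are mine):

```python
from pathlib import Path

CONTEXT_FILE = Path("ComfyUI/Bjornulf/ollama_context.txt")  # path used by the node

def load_context() -> list:
    """Return the saved conversation lines, or an empty list on first run."""
    return CONTEXT_FILE.read_text(encoding="utf-8").splitlines() if CONTEXT_FILE.exists() else []

def append_context(prompt: str, answer: str) -> None:
    """Persist one exchange so the next workflow run can reuse it."""
    CONTEXT_FILE.parent.mkdir(parents=True, exist_ok=True)
    with CONTEXT_FILE.open("a", encoding="utf-8") as f:
        f.write(f"user: {prompt}\nassistant: {answer}\n")
```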
Step 1: Notice that for now the context is empty, so this will be the first message in ComfyUI/Bjornulf/ollama_context.txt:
Step 2: Notice that the number of lines in the context file has changed (these are the same as the updated_context):
Step 3: Notice that the number of lines keeps incrementing.
When clicking the Reset button, it will also save the context in: ComfyUI/Bjornulf/ollama_context_001.txt, ComfyUI/Bjornulf/ollama_context_002.txt, etc...
β οΈ If you want to have an "interactive" conversation, you can enable the option waiting_for_prompt.
When set to True, it will create a Resume button; use it to unpause the node and process the prompt.
Step 1: I run the workflow, notice that Show node is empty, the node is pausing the workflow and is waiting for you to edit the prompt. (Notice that at this moment, it is asking for the capital of France.)
Step 2: I edit the prompt to change France into China, but the node won't process the request until you click on Resume.
Step 3: I click on Resume button, this is when the request is done. Notice that it used China and not France.
Other options:
- control_after_generate forces the node to rerun for every workflow run. (Even if there is no modification of the node or its inputs.)
- max_tokens reduces the size of the answer; a token is about 3 English characters.
- β οΈ Warning: using vram_retention_minutes might be a bit heavy on your VRAM. Think about whether you really need it or not. Most of the time, when using vram_retention_minutes, you don't want to also have an image generation or anything else running at the same time.
Description:
Create a ping-pong effect from a list of images (from a video) by reversing the playback direction when reaching the last frame. Good for an "infinity loop" effect.
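The frame reordering is simple to express in Python; a minimal sketch of the idea (not the node's actual code):

```python
def ping_pong(frames: list) -> list:
    """Play forward, then backward, without duplicating the first and last frames."""
    return frames + frames[-2:0:-1]

print(ping_pong([1, 2, 3, 4]))  # [1, 2, 3, 4, 3, 2]
```

Looping that output gives the seamless back-and-forth "infinity loop" effect described above.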
Description:
Combine a sequence of images into a video file.
β I made this node because it supports transparency with the webm format. (Needed for rembg.)
Temporary images are stored in the folder ComfyUI/temp_images_imgs2video/, as well as the wav audio file.
Description:
Remove transparency from an image by filling the alpha channel with a solid color (black, white or greenscreen).
It takes an image with transparency as input, like one from rembg nodes.
Necessary for some nodes that don't support transparency.
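Per pixel, this is standard "over" alpha compositing against an opaque fill color. A pure-Python sketch of the math (the node itself works on image tensors; this is just the per-pixel idea):

```python
def remove_transparency(rgba_pixels, fill=(0, 0, 0)):
    """Composite RGBA pixels over a solid fill color (e.g. black, white, greenscreen)."""
    out = []
    for r, g, b, a in rgba_pixels:
        # blend each channel: alpha-weighted pixel + inverse-alpha-weighted fill
        out.append(tuple(round(c * a / 255 + f * (255 - a) / 255)
                         for c, f in zip((r, g, b), fill)))
    return out
```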
Description:
Convert an image to grayscale (black & white).
Example: I sometimes use it with IPAdapter to disable color influence.
But sometimes you may simply want a black and white image...
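Grayscale conversion is usually a weighted sum of the color channels; here is the per-pixel idea using the common ITU-R BT.601 luma weights (an illustration, not necessarily the exact weights the node uses):

```python
def to_grayscale(pixel):
    """Convert one RGB pixel to grayscale using BT.601 luma weights."""
    r, g, b = pixel
    y = round(0.299 * r + 0.587 * g + 0.114 * b)  # green contributes the most
    return (y, y, y)
```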
Description:
Stack two images into a single image: a background and one (or several) transparent overlays. (This also allows video: just send all the frames and recombine them after.)
Update 0.11: Added an option to move the overlay vertically and horizontally. (From -50% to 150%.)
β Warning: for now, background is a static image. (I will allow video there later too.)
β οΈ Warning: if you want to directly load the image with transparency, use my Load Image with Transparency node instead of the Load Image node.
Description:
Transform a greenscreen into transparency.
Needs a clean greenscreen, of course. (You can adjust the threshold, but it's a very basic node.)
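The threshold idea can be sketched per pixel like this (a simplified illustration; the node's actual rule may differ):

```python
def green_to_alpha(pixel, threshold=100):
    """Make strongly green pixels transparent; keep everything else opaque."""
    r, g, b = pixel
    # green must be above the threshold AND dominate the other channels
    if g > threshold and g > r and g > b:
        return (r, g, b, 0)    # fully transparent
    return (r, g, b, 255)      # fully opaque
```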
Description:
Take a random line from an input text. (Useful when using multiple "Write Text" nodes would be annoying; you can just copy/paste a list from outside.)
You can switch control_after_generate between fixed and randomize to get a different text each time you run the workflow. (Or not.)
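In essence (an illustrative sketch, with an optional seed standing in for the fixed/randomize control):

```python
import random

def random_line(text: str, seed=None) -> str:
    """Pick one non-empty line from a pasted block of text."""
    lines = [line for line in text.splitlines() if line.strip()]
    return random.Random(seed).choice(lines)
```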
Description:
Iterate over all lines from an input text. (Good for testing multiple lines of text.)
Description:
β This node is used to force the generation of a random seed, along with text.
But what does that mean?
When you use a loop (β»), the loop will use the same seed for each iteration. (That is the point: it keeps the same seed to compare results.)
Even with randomize for control_after_generate, it still uses the same seed for every iteration; the seed only changes when the workflow is done.
Simple example without the random seed node (both images have a different prompt, but the same seed):
So if you want to force another seed for each iteration, you can use this node in the middle.
For example, if you want to generate a different image every time. (Aka: you use loop nodes not to compare or test results, but to generate multiple images.)
Use it like this for example (both images have a different prompt AND a different seed):
Here is an example of the similarities that you want to avoid with FLUX with different prompt (hood/helmet) but same seed :
Here is an example of the similarities that you want to avoid with SDXL with different prompt (blue/red) but same seed :
FLUX : Here is an example of 4 images without Random Seed node on the left, and on the right 4 images with Random Seed node :
Description:
Load an image with transparency.
The default Load Image node will not load the transparency.
Description:
Cut an image from a mask.
Description:
Use my TTS server to generate high-quality speech from text, with any voice you want, in any language.
Listen to the audio example.
β Node never tested on Windows, only on Linux for now. β
Use my TTS server to generate speech from text, based on XTTS v2.
β Of course, to use this ComfyUI node (frontend) you need to use my TTS server (backend): https://github.com/justUmen/Bjornulf_XTTS
I made this backend for https://github.com/justUmen/Bjornulf_lobe-chat, but you can use it with ComfyUI too with this node.
After installing Bjornulf_XTTS, you NEED to create a link called speakers in my ComfyUI custom node folder: ComfyUI/custom_nodes/Bjornulf_custom_nodes/speakers
That link must point to the folder where you installed/stored the voice samples you use for my TTS, like default.wav.
If my TTS server is running on port 8020 (You can test in browser with the link http://localhost:8020/tts_stream?language=en&speaker_wav=default&text=Hello) and voice samples are good, you can use this node to generate speech from text.
Details
This node should always be connected to a core node: Preview audio.
My node will generate and save the audio files in the ComfyUI/Bjornulf_TTS/ folder, followed by the language selected, the name of the voice sample, and the text.
Example of an audio file from the screenshot above: ComfyUI/Bjornulf_TTS/Chinese/default.wav/δ½ εδΊε.wav
Notice that you don't NEED to select a Chinese voice to speak Chinese. Yes, it will work; you can record yourself and make yourself speak whatever language you want.
Also, when you select a voice with the format fr/fake_Bjornulf.wav, it will of course create an extra folder fr: ComfyUI/Bjornulf_TTS/English/fr/fake_Bjornulf.wav/hello_im_me.wav. Easy to see that you are using a French voice sample for an English recording.
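The path scheme described above can be sketched as (helper name is mine; the layout follows the examples in this README):

```python
from pathlib import Path

def tts_output_path(language: str, voice_sample: str, text: str) -> Path:
    """Build ComfyUI/Bjornulf_TTS/<language>/<voice sample>/<text>.wav"""
    return Path("ComfyUI/Bjornulf_TTS") / language / voice_sample / f"{text}.wav"
```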
- control_after_generate: as usual, forces the node to rerun for every workflow run. (Even if there is no modification of the node or its inputs.)
- overwrite: overwrites the audio file if it already exists. (For example, if you don't like the generation, just set overwrite to True and run the workflow again until you get a good result; after that you can set it back to False. Paraphrasing: without overwrite set to True, it won't generate the audio file again if it already exists in the Bjornulf_TTS folder.)
- autoplay: plays the audio file inside the node when it is executed. (Manual replay or save is done in the preview audio node.)
So... note that if you know you already have an audio file ready to play, you can still use my node, but you do NOT need my TTS server to be running.
My node will just play the audio file if it can find it and won't try to connect to the backend TTS server.
Let's say you already used this node to create an audio file saying "workflow is done" with the Attenborough voice:
As long as you keep exactly the same settings, it will not use my server to play the audio file! You can safely turn the TTS server off, so it won't use your precious VRAM. (The TTS server uses ~3GB of VRAM.)
Also, connect_to_workflow is optional; it means you can make a workflow with ONLY my TTS node to pre-generate the audio files with the sentences you want to use later. Example:
If you want to run my TTS nodes alongside image generation, I recommend using my PAUSE node so you can manually stop the TTS server after my TTS node. When the VRAM is freed, you can then click on the RESUME button to continue the workflow.
If you can afford to run both at the same time, good for you, but locally I can't run my TTS server and FLUX at the same time, so I use this trick:
Description:
Generate a character description based on a json file in the characters folder: ComfyUI/custom_nodes/Bjornulf_custom_nodes/characters
Make your own json file with your own characters, and use this node to generate a description.
β For now it's a very basic node; a lot of things are going to be added and changed!
Some details are unusable with some checkpoints; it's very much a work in progress, and the json structure isn't set in stone either.
Some characters are included.
Description:
Sometimes you want to loop over several inputs but also keep the different lines of your output separated.
With this node, you can have as many inputs and outputs as you want. See the example for usage.
Description:
This is my attempt at freeing up VRAM after usage; I will try to improve it.
For me, ComfyUI is using 180MB of VRAM on launch, and after my clean-up-VRAM node it can go back down to 376MB.
I don't think there is a clean way to do that, so I'm using a hacky way.
So, not perfect, but better than being stuck at 6GB of VRAM used when I know I won't be using it again...
Just connect this node to your workflow; it takes anything as input and returns it as output.
You can therefore put it anywhere you want.
β ComfyUI uses a cache to run faster (e.g. not reloading checkpoints), so only use this free-VRAM node when you need it.
β For this node to work properly, you need to enable the dev/api mode in ComfyUI. (You can do that in the settings.)
It also runs an "empty/dummy" workflow to free up the VRAM, so it might take a few seconds to take effect after the end of the workflow.
Description:
Automatically pause the workflow and ring a bell when it does. (Plays the provided audio file bell.m4a.)
You can then manually resume or stop the workflow by clicking on the node's buttons.
I do that, for example, when I have a very long upscaling process: I can check whether the input is good before continuing. Sometimes I might stop the workflow and restart it with another seed.
You can connect any type of node to the pause node; above is an example with text, but you can send an IMAGE or whatever else. In this node, input = output. (Of course you need to send the output to something that has the correct format...)
Description:
Automatically pause the workflow and ring a bell when it does. (Plays the provided audio file bell.m4a.)
You can then manually select the input you want to use, and resume the workflow with it.
You can connect this node to anything you want; above is an example with IMAGE. You can pick whatever you want. In this node, input = output.
Description:
Just take a random image from a list of images.
Description:
Loop over a list of images.
Usage example : You have a list of images, and you want to apply the same process to all of them.
Above is an example of the loop images node sending them to an Ipadapter workflow. (Same seed of course.)
Description:
If you need a quick loop but don't want something too complex with a loop node, you can use this combined write text + loop.
It takes the same special syntax as the Advanced Write Text node, like {blue|red}, but it will loop over ALL the possibilities instead of taking one at random.
0.40: You can also use variables <name> in the loop.
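Expanding every {a|b|c} block into all combinations is a Cartesian product; here is a minimal sketch of that idea (the <name> variable syntax is not handled here, and this is not the node's actual code):

```python
import itertools
import re

def all_variants(text: str) -> list:
    """Expand every {a|b|c} block into the full list of combinations."""
    # re.split with a capture group keeps the {…} contents at odd indices
    parts = re.split(r"\{([^{}]*)\}", text)
    options = [part.split("|") if i % 2 else [part] for i, part in enumerate(parts)]
    return ["".join(combo) for combo in itertools.product(*options)]

print(all_variants("a {blue|red} cat"))  # ['a blue cat', 'a red cat']
```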
Description:
Simply take a trio at random from a Load checkpoint node.
Notice that it uses the core Load checkpoint node, which means all checkpoints will be preloaded in memory.
Details:
Check node number 41 before deciding which one to use.
Description:
This is another way to select a Load checkpoint node randomly.
It will not preload all the checkpoints in memory, so switching between checkpoints will be slower.
But you get more outputs to decide where to store your results. (model_folder returns the last folder name of the checkpoint's path.)
I always store my checkpoints in a folder named after the type of the model, like SD1.5, SDXL, etc... so it's a good way for me to recover that information quickly.
Details:
- All models are going to share the exact same workflow. (You can't have, for example, a CLIP Set Last Layer node set at -2 for a specific model, or a separate vae or clip.)
- Check node number 40 before deciding which one to use.
- Node 53 is the loop version of this node.
NOTE: If you want to load a single checkpoint but want to extract its folder name (to use the checkpoint name as a folder name for example, or with the if/else node), you can use my node 41 with only one checkpoint. (It will take one at random, so... always the same one.)
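The model_folder output described above amounts to taking the last folder name of the checkpoint's path; a one-line sketch:

```python
from pathlib import Path

def model_folder(checkpoint_path: str) -> str:
    """e.g. 'SDXL/some_model.safetensors' -> 'SDXL' (last folder of the checkpoint path)."""
    return Path(checkpoint_path).parent.name
```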
Description:
Loop over all the trios from several checkpoint nodes.
Description:
Quickly select all images from a folder inside the output folder. (Not recursively.)
As you can see from the screenshot, the images are split based on their resolution.
It's also not possible to edit the number of outputs dynamically, so I just picked a number: 4.
The node separates the images based on their resolution, so with this node you can have 4 different resolutions per folder. (If you have more than that, maybe you should use another folder...)
To avoid an error or crash when a folder has fewer than 4 resolutions, the node will just output white tensors. (White square images.)
So this node is a little hacky for now, but I can select my different characters in less than a second.
If you want to know how I personally save my images for a specific character, here is part of my workflow (notice that I personally use / for folders because I'm on Linux):
In this example I put "character/" as a string and then combine with "nothing". But it's the same if you do "character" and then combine with "/". (I just like having a / at the end of my folder names...)
If you are satisfied with this logic, you can then select all these nodes, right click, and Convert to Group Node; you then have your own customized "save character node":
Here is another example of the same thing but excluding the save folder node:
β οΈ If you really want to regroup all the images in one flow, you can use my node 47 Combine Images to put them all together.
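The resolution-splitting logic of the Load Images from output folder node can be sketched as follows (a simplified illustration with dicts standing in for image tensors; a None placeholder stands in for the node's white images):

```python
def split_by_resolution(images, num_outputs=4, blank=None):
    """Group images by (width, height); pad missing groups with a blank placeholder."""
    groups = {}
    for img in images:
        groups.setdefault((img["width"], img["height"]), []).append(img)
    buckets = list(groups.values())[:num_outputs]   # at most 4 resolutions per folder
    while len(buckets) < num_outputs:
        buckets.append([blank])                     # the node outputs white images here
    return buckets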
Description:
Select an image from a list of images.
Useful in combination with my Load images from folder and preview image nodes.
You can also of course make a group node, like this one, which is the same as the screenshot above :
Description:
Complex if/else logic node.
If the input given is equal to the compare_with value given in the widget, it will forward send_if_true, otherwise it will forward send_if_false. (If no send_if_false is connected, it will return None.)
You can forward anything; below is an example of forwarding a different latent-space size depending on whether it's SDXL or not.
Here is an example of the node with all outputs displayed with Show text nodes:
send_if_false is optional; if not connected, it will be replaced by None.
If-Else nodes are chainable: just connect output to send_if_false.
β οΈ Always simply test input against compare_with, and connect the desired value to send_if_true. β οΈ
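The node's behavior, including chaining through send_if_false, can be sketched in plain Python (an illustration, not the node's actual code):

```python
def if_else(input_value, compare_with, send_if_true, send_if_false=None):
    """Forward send_if_true on a match, otherwise send_if_false (None if absent)."""
    return send_if_true if input_value == compare_with else send_if_false

# chaining: the inner call plays the role of a second If-Else node on send_if_false
model = "SD1.5"
size = if_else(model, "SDXL", (1024, 1024),
               if_else(model, "SD1.5", (512, 512)))
print(size)  # (512, 512)
```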
Here is a simple example with 2 If-Else nodes (choosing between 3 different resolutions).
β Notice that the same Write Text node is connected to both If-Else nodes' input:
Let's take a similar example, but using my Write loop text node to display all 3 types at once:
If you understood the previous examples, here is a complete example that will create 3 images: landscape, portrait and square.
(The workflow is hidden for simplicity, but it's very basic: just connect the latent to the KSampler, nothing special.)
You can also connect the same advanced loop write text node to my save folder node to save the images (landscape/portrait/square) in separate folders, but you do you...
Description:
Display the details of an image. (width, height, has_transparency, orientation, type)
RGBA is considered as having transparency, RGB is not.
orientation can be landscape, portrait or square.
Description:
Combine multiple images. (Each input can be a single image or a list of images.)
If you want to merge several images into a single image, check node 60 or 61.
There are two types of logic to "combine images". With "all_in_one" enabled, it will combine all the images into one tensor.
Otherwise it will send the images one by one. (check examples below) :
This is an example of the "all_in_one" option disabled (Note that there are 2 images, these are NOT side by side, they are combined in a list.) :
But if, for example, you want to use my node select an image, pick, you need to enable all_in_one and the images must all have the same resolution.
You can notice that there is no visible difference when you use all_in_one with the preview image node. (This is why I added the show text node; note that show text will make it blue, because it's an image/tensor.)
When you use the combine image node, you can actually also send many images at once; it will combine them all.
Here is an example with the Load images from folder node, the Image details node and the Combine images node. (Of course, all_in_one can't be set to True in this situation because the images have different resolutions) :
Here is another simple example taking a few selected images from a folder and combining them (for later processing, for example) :
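The two behaviours can be sketched in plain Python (dicts stand in for image tensors here; this is an illustration of the logic, not the node's implementation):

```python
def combine_images(images, all_in_one=False):
    # all_in_one=True: merge everything into one batch, which is only
    # valid when every image has the same resolution (like stacking tensors).
    # all_in_one=False: forward a plain list, sent downstream one by one.
    if all_in_one:
        sizes = {img["size"] for img in images}
        if len(sizes) > 1:
            raise ValueError("all_in_one needs images of identical resolution")
        return {"batch": list(images)}
    return list(images)
```

This is why the node refuses all_in_one when the folder contains mixed resolutions.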
Description:
Take text as input and scramble (randomize) the text using the file scrambler/character_scrambler.json in the comfyui custom nodes folder.
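The idea can be sketched like this; the JSON structure (a word mapped to a list of alternatives) is an assumption for illustration, check character_scrambler.json for the real layout:

```python
import json
import random

def scramble(text, table_json, seed=None):
    # Replace each word found in the table with a random alternative;
    # unknown words pass through unchanged.
    table = json.loads(table_json)
    rng = random.Random(seed)
    words = [rng.choice(table[w]) if w in table else w for w in text.split()]
    return " ".join(words)
```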
Description:
This node takes a video path as input and displays the video.
Description:
This node will take a list of images and convert them to a temporary video file.
β Update 0.50 : You can now send audio to the video. (audio_path OR audio TYPE)
Description:
This node will take a video path as input and convert it to a list of images.
In the above example, I also take half of the frames by setting frame_interval to 2.
Note that I had 16 frames; on the top right preview you can see 8 images.
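The frame_interval behaviour amounts to a simple stride over the frame list (a sketch, not the node's code):

```python
def video_frames_to_images(frames, frame_interval=1):
    # frame_interval=2 keeps every second frame: 16 frames -> 8 images
    return frames[::frame_interval]
```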
Description:
This is an overengineered node that will try to synchronize the duration of an audio file with a video file.
β Video ideally needs to be a loop, check my ping pong video node if needed.
The main goal of this synchronization is to have a clean transition between the end and the beginning of the video. (same frame)
You can then chain several videos and they will transition smoothly.
Some details, this node will :
It works well with, for example, MuseTalk: https://github.com/chaojie/ComfyUI-MuseTalk
Here is an example of the Audio Video Sync node; notice that it is also convenient to recover the frames per second of the video and send that to other nodes. (Spaghettis..., deal with it. π If you don't understand it, you can test it.) :
β Update 0.50 : audio_duration is now optional, if not connected it will take it from the audio.
β Update 0.50 : You can now send the video with a list of images OR a video_path, same for audio : AUDIO or audio_path.
New v0.50 layout, same logic :
Description:
This is the loop version of node 41. (check there for similar details)
It will loop over all the selected checkpoints.
β The big difference with 41 is that the checkpoints are preloaded in memory, so you can run them all at once, faster.
It is a good way to test multiple checkpoints quickly.
Description:
Loop over all the selected Loras.
Above is an example with Pony and several styles of Lora.
Below is another example, here with flux, to test if your Lora training was undertrained, overtrained or just right :
Description:
Just take a single Lora at random from a list of Loras.
Description:
This loop works like a normal loop, BUT it is sequential : It will run only once for each workflow run !!!
The first time it will output the first integer, the second time the second integer, etc...
When the last is reached, the node will STOP the workflow, preventing anything else from running after it.
Under the hood it is using a single file counter_integer.txt in the ComfyUI/Bjornulf folder.
β Do not use more than one node like this one in a workflow, because they will share the same counter_integer.txt file. (Unexpected behaviour.)
Update 0.57: Now also contains the next counter in the reset button.
Description:
This loop works like a normal loop, BUT it is sequential : It will run only once for each workflow run !!!
The first time it will output the first line, the second time the second line, etc...
You also have control of the line with +1 / -1 buttons.
When the last is reached, the node will STOP the workflow, preventing anything else from running after it.
Under the hood it is using the file counter_lines.txt in the ComfyUI/Bjornulf folder.
Here is an example of usage with my TTS node: when I have a list of sentences to process and I don't like a version, I can just click the -1 button, tick "overwrite" on the TTS node, and it will generate the same sentence again; repeat until it's good.
β Do not use more than one node like this one in a workflow, because they will share the same counter_lines.txt file. (Unexpected behaviour.)
Update 0.57: Now also contains the next counter in the reset button.
If you want to be able to predict the next line, you can use node 68 to Add line numbers.
Description:
Take two videos and concatenate them. (One after the other in the same video.)
Convert a video, can use FFMPEG_CONFIG_JSON. (From node 76 / 77)
Description:
Simply combine video and audio together.
Video : Use list of images or video path.
Audio : Use audio path or audio type.
Description:
Merge images or videos horizontally.
Here is one possible example for videos with node 60 and 61 :
Description:
Merge images or videos vertically.
Here is one possible example for videos with node 60 and 61 :
Description:
Take an image as input and describe it. Uses moondream by default, but you can select any model with node 63.
Description:
Use custom configurations for Ollama Talk and Vision.
You can change the ollama Url and the model used.
Some vision models can also do text to a certain extent.
Example of an Ollama Vision Node and an Ollama Talk Node using the same Ollama Configuration Node :
Description:
Select a personality for your Ollama Talk Node; set it to None for plain chat.
If you want to write your own, just set it to None and write your prompt as a prefix.
Description:
Select a personality for your Ollama Talk Node.
If you want to write your own, just set it to None and write your prompt as a prefix.
Below, an example of a crazy scientist explaining gravity. (Notice that the LLM was smart enough to understand the typo) :
Description:
Use faster-whisper to transform an AUDIO type or audio_path into text. (Autodetects the language.)
Description:
Sometimes you want to force a node to accept a STRING.
You can't do that for example if the node is taking a LIST as input.
This node can be used in the middle to force a STRING to be used anyway.
Below is an example of that with my TTS node.
Description:
Sometimes you want to force something to be a STRING.
Most outputs are indeed text, even though they might be unusable.
This node ignores this fact and simply converts the input to a simple STRING.
Description:
This node will just add line numbers to text.
Useful when you want to use node 57 that will loop over input lines. (You can read/predict the next line.)
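The transformation is essentially the following (a sketch; the exact separator the node uses is an assumption):

```python
def add_line_numbers(text, separator=": "):
    # Prefix each line with its 1-based number, e.g. "a\nb" -> "1: a\n2: b"
    return "\n".join(f"{i}{separator}{line}"
                     for i, line in enumerate(text.splitlines(), start=1))
```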
Description:
Resize an image by percentage.
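The arithmetic behind a percentage resize is simply scaling both sides (sketch only; the node itself also resamples the pixels):

```python
def resized_dimensions(width, height, percentage):
    # 50% halves both sides; round() keeps integer pixel sizes
    return round(width * percentage / 100), round(height * percentage / 100)
```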
Description:
Basic node, show an INT. (You can simply drag any INT node and it will be recommended.)
Description:
Basic node, show a FLOAT. (You can simply drag any FLOAT node and it will be recommended.)
Description:
Basic node, show a STRING. (You can simply drag any STRING node and it will be recommended.)
Description:
This node will take a STRING and format it as a readable JSON. (and pink)
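Formatting a STRING as readable JSON boils down to a parse followed by an indented re-serialization (a minimal sketch, minus the pink):

```python
import json

def format_json(text):
    # Parse the incoming STRING and re-serialize it with indentation
    return json.dumps(json.loads(text), indent=4)
```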
Description:
Replace text with another text; allows regex and more options, check the examples below :
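The literal-versus-regex option can be sketched like this (the option name use_regex is an assumption for illustration):

```python
import re

def replace_text(text, pattern, replacement, use_regex=False):
    # use_regex=True treats pattern as a regular expression,
    # otherwise it is matched literally.
    if use_regex:
        return re.sub(pattern, replacement, text)
    return text.replace(pattern, replacement)
```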
Description:
Create a FFMPEG_CONFIG_JSON; it will contain a JSON that can be used by other nodes :
Description:
Extract details from a video_path.
You can use the all-in-one FFMPEG_CONFIG_JSON with other nodes or just use the other variables as you want.
Description:
Convert a video, can use FFMPEG_CONFIG_JSON.
Description:
Take a list of videos (one per line) and concatenate them. (One after the other in the same video.)
Can use FFMPEG_CONFIG_JSON. (From node 76 / 77)
Description:
Tired of setting up latent space manually?
Select one from my custom list of formats.
Just connect that to your KSampler.
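For Stable Diffusion family models, a resolution preset maps to a latent 8× smaller per side with 4 channels; the mapping can be sketched as:

```python
def empty_latent_shape(width, height, batch_size=1):
    # SD/SDXL latents: 4 channels, spatial size divided by 8
    return (batch_size, 4, height // 8, width // 8)
```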
Description:
Main node to generate content; it doesn't really do much by itself, just camera angle and multicharacter action. (For example : ... eat picnic, view from above.) BUT, you can connect other Text Generator Nodes to it.
β οΈ Warning for "Text Generator" : This node is JUST writing text, text is then interpreted by a checkpoint (SD1.5, SDXL, Flux...) to generate an image.
Some models are very bad at doing some things, so DON'T EXPECT everything you do to work properly all the time with every checkpoint or Lora. (This node was made with FLUX in mind.)
Below is a tutorial on how to use all my Text Generator nodes. I did that small tutorial in 8 steps:
Step 1 : You use the main Text Generator node; it will write general details about the image (here camera_angle and shot_type). For now I just combine the text with a simple "write text" that will send swamp monster :
Step 2 : Add a specific style to your image :
Step 3 : Add scene/background to your image :
Step 4 : Add a character to the scene using a character node, instead of the Write text Node.
I will remove the "swamp monster" from the "write text" node and use my Character Node instead; I will use it to create an aggressive dragon with lightning powers :
Step 5 : Character nodes (Male/Female and creatures) can contain more than one character. (But they will share the same characteristics)
Below I removed the dragon and I created 2 "Character male" fighting by using the multi_char_action from the main node. (You can set it to CUSTOM and write your own action too.)
Step 5 : Let's try to add a location for the character, I want to put it on the left of the image. Here is a failure with the SDXL model I have been using all along :
Step 6 : Switch to FLUX to test the location_on_image feature (which is working) :
Step 7 : Switch to the Black Forest Labs API with FLUX Ultra, using my API custom node 109.
If you want several characters with different characteristics (like location_on_image or whatever), you can chain several Character Nodes together by connecting them to each other.
You can see below that I asked for 2 tiny dragons on the left and a zombie on the right.
Step 8 : And to end this tutorial, I will disable the Zombie and add an outfit (here a floral armor); I will also add a pose node for the character and connect this pose node to an object node. (Together they will make the character hold a book and put his hand on chin.)
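Since these nodes "are JUST writing text", chaining them essentially joins text fragments into one prompt for the checkpoint; a simplified sketch (not the nodes' actual formatting):

```python
def combine_fragments(*fragments):
    # Each connected generator node contributes one text fragment;
    # empty/disconnected inputs are skipped.
    return ", ".join(f for f in fragments if f)

prompt = combine_fragments("view from above", "floral armor", "", "hand on chin")
```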
Description:
Generate text related to female characters.
Needs to be connected to the "Text Generator" main node.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to male characters.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to creatures. (characters)
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to the pose of characters.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to an object connected to a pose, that is connected to a character.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to a specific scene, connects directly to the main text generator.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to a specific style, connects directly to the main text generator.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to a specific female outfit.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Generate text related to a specific male outfit.
β οΈ For "Text Generator" tutorial see node 81.
Description:
Loop made to loop over elements for the main Text Generator node.
All the List Looper nodes have the same logic; you should be able to use them all the same way.
Here is an example with node 92 (List Looper Scenes), looping over all the different weather_condition values :
β οΈ Note: if you want to loop over the elements One by One, not all in one, DO NOT use these List Looper nodes !!
You can just convert the element you want as input and double click to create a new node that you can set to "increment".
Example, here you can see that the value was "incremented", aka changed to the next from the list, the next run will then have the next value from the list (and so on) :
Description:
Loop made to loop over elements for the node scenes.
β οΈ For "List Looper" tutorial see node 91.
Description:
Loop made to loop over elements for the node style.
β οΈ For "List Looper" tutorial see node 91.
Description:
Loop made to loop over elements for the node poses.
β οΈ For "List Looper" tutorial see node 91.
Description:
Loop made to loop over elements for the character nodes (male/female/creature).
β οΈ For "List Looper" tutorial see node 91.
Description:
Loop made to loop over elements for the node for male outfits.
β οΈ For "List Looper" tutorial see node 91.
Description:
Loop made to loop over elements for the node for female outfits.
β οΈ For "List Looper" tutorial see node 91.
Description:
This is the same as a basic "Load checkpoint" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the sd1.5 version; it will download the models in : ComfyUI/models/checkpoints/Bjornulf_civitAI/sd1.5
After downloading, you can keep using this node as is to load your checkpoint, or use the downloaded model from a basic "Load checkpoint" node.
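Under the hood this amounts to an authenticated download request against civitai; the endpoint shape and Bearer header below are assumptions for illustration, check civitai's API documentation for the real contract:

```python
def build_civitai_download(version_id, api_token):
    # Hypothetical request layout: download endpoint keyed by model
    # version id, API token passed as an Authorization header.
    url = f"https://civitai.com/api/download/models/{version_id}"
    headers = {"Authorization": f"Bearer {api_token}"}
    return url, headers
```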
Description:
This is the same as a basic "Load checkpoint" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the sdxl_1.0 version; it will download the models in : ComfyUI/models/checkpoints/Bjornulf_civitAI/sdxl_1.0
After downloading, you can keep using this node as is to load your checkpoint, or use the downloaded model from a basic "Load checkpoint" node.
Description:
This is the same as a basic "Load checkpoint" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the pony version; it will download the models in : ComfyUI/models/checkpoints/Bjornulf_civitAI/pony
After downloading, you can keep using this node as is to load your checkpoint, or use the downloaded model from a basic "Load checkpoint" node.
Description:
This is the same as a basic "Load checkpoint" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the flux_d version; it will download the models in : ComfyUI/models/checkpoints/Bjornulf_civitAI/flux_d
After downloading, you can keep using this node as is to load your checkpoint, or use the downloaded model from a basic "Load checkpoint" node.
π§ Work in progress, need to manually clean up list, diffusers, etc.. ? π§
Description:
This is the same as a basic "Load checkpoint" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the flux_s version; it will download the models in : ComfyUI/models/checkpoints/Bjornulf_civitAI/flux_s
After downloading, you can keep using this node as is to load your checkpoint, or use the downloaded model from a basic "Load checkpoint" node.
π§ Work in progress, need to manually clean up list, diffusers, etc.. ? π§
Description:
This is the same as a basic "Load lora" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the sd_1.5 version; it will download the Lora in : ComfyUI/models/loras/Bjornulf_civitAI/sd_1.5
After downloading, you can keep using this node as is to load your lora, or use the downloaded lora from a basic "Load lora" node.
Below is an example with Lora "Colorize" :
Description:
This is the same as a basic "Load lora" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the sdxl_1.0 version; it will download the Lora in : ComfyUI/models/loras/Bjornulf_civitAI/sdxl_1.0
After downloading, you can keep using this node as is to load your lora, or use the downloaded lora from a basic "Load lora" node.
Below is an example with Lora "Better faces" :
Description:
This is the same as a basic "Load lora" node, but the list is from civitai (not your local folder).
It will also download the file from civitai if you don't have it on your computer yet. (You need an api token from your account. - Find yours on civitai.com settings. -)
This is the pony version; it will download the Lora in : ComfyUI/models/loras/Bjornulf_civitAI/pony
After downloading, you can keep using this node as is to load your lora, or use the downloaded lora from a basic "Load lora" node.
Description:
Generate images with only a token.
This is the fal.ai version and will save the image in ComfyUI/output/API/CivitAI/
Description:
Generate images with only a token.
This is the civit.ai version and will save the image in ComfyUI/output/API/CivitAI/
β οΈ Warning : Civitai isn't the most reliable API; sometimes it doesn't answer, or takes a long time to answer, some URNs respond better than others, etc...
Use it at your own risk; I do not recommend running anything "costly" through their API, like Flux Ultra, etc... (Use the website instead with blue buzz.)
API requests (like from this node) are using yellow buzz.
Description:
Use a Lora with the API; below is an example showing clearly, with the same seed, the difference with and without the Lora.
Description:
Generate an image with the Black Forest Labs API. (flux)
Description:
Generate an image with the Stability API. (sd3)