Run ComfyUI workflows on multiple local GPUs/networked machines, with options to edit the JSON values within ComfyUI. Original repo: city96/ComfyUI_NetDist
Comfyanonymous; for obvious reasons <br> City96; without the base NetDist repo, I wouldn't have attempted this. <br> EventStationAI; for some GPU support. <br> All node creators whose work I used in some way while building the workflows or code snippets. (Easy Use, IPAdapter_Plus, CR) <br> Claude; what do I do next? Can you debug this error? <br> Ogkai; thanks for encouraging me to start pushing the stuff I make or modify. <br> *On Twitter (X) if you have questions :)
*Remote Latents: I didn't get a chance to test it. <br> *Batched Base64 images: there are existing nodes that should fix that. <br> *Batch size > 1 for STYLE TRANSFER: I didn't take note of the errors I got, but that needs some work.
The use case for this is running T5 and CLIP-L on a different ComfyUI instance so the primary PC can focus on running the UNET and VAE.
This workflow is useful for comparing the Flux Dev and Schnell models. Since the remote PC runs Schnell, the wait is bearable.
This uses a remote PC to run an SDXL IPAdapter style transfer pipeline.
There is currently a single external requirement: the `requests` library.

```
pip install requests
```
To install, simply clone into the custom nodes folder.

```
git clone https://github.com/city96/ComfyUI_NetDist ComfyUI/custom_nodes/ComfyUI_NetDist
```
You will need at least two different ComfyUI instances. You can use two local GPUs by setting different `--port [port]` and `--cuda-device [number]` launch arguments. For the second instance you'll most likely want `--port 8288 --cuda-device 1`.
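As a concrete sketch (paths and the second port are illustrative; adjust to your setup):

```shell
# First (primary) instance on GPU 0, default port:
python main.py --port 8188 --cuda-device 0

# Second instance on GPU 1, on a different port:
python main.py --port 8288 --cuda-device 1
```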
This is the simplest setup for people who have 2 GPUs or two separate PCs. It only requires two nodes to work.
You can set the local/remote batch size, as well as when the node should trigger (set it to 'always' if it isn't getting executed - i.e. you changed a sampler setting but not the seed.)
If you're running your second instance on a different PC, add `--listen` to your launch arguments and set the correct remote IP (open a terminal window and check with `ipconfig` on Windows or `ip a` on Linux).
The `FetchRemote` ('Fetch from remote') node takes an image input. This should be the final image that you want to get back from your second instance (make sure not to route it back into itself). This node will wait for the second image to be generated (there's currently no preview/progress bar).
Workflow JSON: NetDistSimple.json
You can kind of scale the example above by connecting more of the simple queue nodes together, but the seed handling is a bit janky and you can get duplicate images if you try to reuse it. I guess just set the seed to randomized on both.
This is mostly meant for more "advanced" setups with more than two GPUs. It allows easier per-batch overrides as well as setting a default batch size.
It also allows using a workflow JSON as an input. To allow any workflow to run, the final image can be set to "any" instead of the default "final_image" (which would require the `FetchRemote` node to be in the workflow).
I have nodes to save/load the workflows, but ideally there would be some nodes to also edit them - search and replace seed, etc. PRs welcome ;P
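In the meantime, a seed search-and-replace can be done in a few lines of plain Python before queueing the workflow. This is a generic sketch, not a node from this pack; it assumes the API-format workflow JSON, where nodes keep their parameters under an `inputs` key:

```python
import json

def replace_seeds(workflow: dict, new_seed: int) -> dict:
    """Set every 'seed' input in an API-format workflow dict to new_seed."""
    for node in workflow.values():
        inputs = node.get("inputs", {})
        if "seed" in inputs:
            inputs["seed"] = new_seed
    return workflow

# Example: a minimal fake workflow with one sampler node.
wf = json.loads('{"3": {"class_type": "KSampler", "inputs": {"seed": 1}}}')
replace_seeds(wf, 42)
print(wf["3"]["inputs"]["seed"])  # -> 42
```

The same loop works for any other input name (steps, cfg, etc.) by swapping the `"seed"` key.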
Workflow JSON: NetDistAdvancedV2.json
(This needs a fake image input to trigger, you can just give it a blank image).
The `LoadImageUrl` ('Load Image (URL)') node acts just like the normal 'Load Image' node.
The `SaveImageUrl` ('Save Image (URL)') node sends a POST request to the target URL with a JSON containing the images.
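As a rough sketch of building such a payload with the standard library (the `images` field name here is an assumption for illustration, not the node's documented schema):

```python
import base64
import json

def build_image_payload(png_bytes: bytes) -> str:
    """Wrap raw PNG bytes in a JSON body as a base64 data URI."""
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return json.dumps({"images": ["data:image/png;base64," + encoded]})

# Tiny stand-in for real PNG data (the PNG magic bytes):
payload = build_image_payload(b"\x89PNG\r\n\x1a\n")
print(json.loads(payload)["images"][0][:22])  # -> data:image/png;base64,
```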
The images are base64 encoded, with the `data:image/png;base64` prefix.

This node pack has a set of nodes which should (in theory) allow you to pass latents between the instances seamlessly. A node to save the input latent as a `.npy`
file is provided. This node also returns the filename of the saved latent, which can then be loaded by the other instance.
To load a latent from the other instance, you can plug the filename into this URL:

```
# change the filename with a string replacement node.
http://127.0.0.1:8188/view?filename=ComfyUI_00001_.latent&type=output

# To load them from the input folder instead, change type to 'input'
http://127.0.0.1:8188/view?filename=TestLatent.npy&type=input
```
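If you're scripting against the instance yourself rather than using a string replacement node, the same URL can be built with the standard library (host, port, and filename are just the examples above):

```python
from urllib.parse import urlencode

def view_url(host: str, filename: str, folder: str = "output") -> str:
    """Build a ComfyUI /view URL for a saved file."""
    query = urlencode({"filename": filename, "type": folder})
    return f"http://{host}/view?{query}"

print(view_url("127.0.0.1:8188", "ComfyUI_00001_.latent"))
# -> http://127.0.0.1:8188/view?filename=ComfyUI_00001_.latent&type=output
```

Using `urlencode` keeps the URL valid even if the filename contains characters that need escaping.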
The `LoadLatentNumpy` node can also load the default safetensors latents, the `.npy` ones (a simple numpy file containing just the latent in the standard torch format), as well as the sd_scripts `.npz` cache files.
(Note: loading may fail across operating systems due to an `os.sep` mismatch.)
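For reference, the `.npy` flavor is just a plain numpy array in the usual `[batch, channels, height, width]` latent layout. A minimal round-trip sketch (assumes numpy is installed; the 1x4x64x64 shape is an SD-style latent for a 512x512 image, not something this pack mandates):

```python
import os
import tempfile

import numpy as np

# A fake 1x4x64x64 latent (4 channels, spatial dims = image size / 8).
latent = np.zeros((1, 4, 64, 64), dtype=np.float32)

# Save it the way a .npy latent file would be written...
path = os.path.join(tempfile.mkdtemp(), "TestLatent.npy")
np.save(path, latent)

# ...and load it back.
loaded = np.load(path)
print(loaded.shape)  # -> (1, 4, 64, 64)
```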