ComfyUI Extension: ComfyUI-Distributed
A custom node extension for ComfyUI that enables distributed image generation across multiple GPUs through a master-worker architecture.
A powerful extension for ComfyUI that enables distributed and parallel processing across multiple GPUs and machines. Generate more images and videos and accelerate your upscaling workflows by leveraging all the GPU resources available on your network and in the cloud.
Key Features
Parallel Workflow Processing
- Run your workflow on multiple GPUs simultaneously with varied seeds and collect the results on the master
- Scale output with more workers
- Supports images and videos
Distributed Upscaling
- Accelerate Ultimate SD Upscale by distributing tiles across GPUs
- Intelligent tile distribution across available workers
- Handles single images and batches
Ease of Use
- Auto-setup local workers; easily add remote/cloud ones
- Convert any workflow to distributed with 2 nodes
- JSON configuration with UI controls
Worker Types
<img width="200" align="right" alt="ComfyUI_temp_khvcc_00034_@0 25x" src="https://github.com/user-attachments/assets/651e4912-7c23-4e32-bd88-250f5175e129" />ComfyUI Distributed supports three types of workers:
- Local Workers - Additional GPUs on the same machine (auto-configured on first launch)
- Remote Workers - GPUs on other computers in your network
- Cloud Workers - GPUs hosted on a cloud service like Runpod, accessible via secure tunnels
For detailed setup instructions, see the setup guide.
Requirements
- ComfyUI (note: the desktop app is not currently supported)
- Multiple NVIDIA GPUs (no additional GPUs? Use Cloud Workers)
- That's it
Installation
- Clone this repository into your ComfyUI custom nodes directory:
  `git clone https://github.com/robertvoy/ComfyUI-Distributed.git`
- Restart ComfyUI
- If you'll be using remote/cloud workers, add `--enable-cors-header` to your launch arguments on the master (a quick connectivity check is sketched after this list)
- Read the setup guide for adding workers
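Remote and cloud workers need network access to the master, so it can save time to confirm the master's API is reachable before adding them. Below is a minimal sketch, assuming the master listens on ComfyUI's default port 8188 and using ComfyUI's standard `/system_stats` endpoint; the host address is a placeholder for your own setup:

```python
# Quick reachability check from a worker machine to the master. The host below
# is hypothetical and 8188 is ComfyUI's default port -- adjust both to your setup.
import json
import urllib.request

MASTER_URL = "http://192.168.1.10:8188"  # replace with your master's address

with urllib.request.urlopen(f"{MASTER_URL}/system_stats", timeout=5) as resp:
    stats = json.load(resp)

# If this prints, the master's ComfyUI API is reachable from this machine.
print("Master reachable; devices:", [d.get("name") for d in stats.get("devices", [])])
```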
Official Sponsor
Join Runpod with this link and unlock a special bonus.
Workflow Examples
Basic Parallel Generation
Generate multiple images in the time it takes to generate one. Each worker uses a different seed.
- Open your ComfyUI workflow
- Add Distributed Seed → connect to sampler's seed
- Add Distributed Collector → after VAE Decode
- Enable workers in the UI
- Run the workflow!
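As a rough illustration of the seed behaviour described above (conceptual only, not the extension's actual code; the Distributed Seed node handles per-worker seeds for you), each GPU samples with its own seed, which is why one queue press yields several distinct images:

```python
# Conceptual sketch only -- not the extension's code. With a master plus two
# workers, the same workflow runs three times in parallel, each with a
# different seed, so one run produces three different images.
base_seed = 123456789
gpu_count = 3  # master + 2 workers enabled in the Distributed UI

# One hypothetical per-GPU seed variation; the real offsetting is handled by
# the Distributed Seed node.
seeds = [base_seed + i for i in range(gpu_count)]
print(seeds)  # [123456789, 123456790, 123456791]
```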
Parallel WAN Generation
Generate multiple videos in the time it takes to generate one. Each worker uses a different seed.
- Open your WAN ComfyUI workflow
- Add Distributed Seed → connect to sampler's seed
- Add Distributed Collector → after VAE Decode
- Add Image Batch Divider → after Distributed Collector
- Set `divide_by` to the number of GPUs you have available (for example, with a master and 2x workers, set it to 3; see the sketch after this list)
- Enable workers in the UI
- Run the workflow!
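To make the `divide_by` arithmetic concrete, here is a small sketch. It is conceptual only, not the extension's code, and it assumes the Distributed Collector returns every GPU's frames as one combined image batch; the frame count and variable names are made up for illustration:

```python
# Conceptual sketch only -- not the extension's code. Assuming the collector
# hands the Image Batch Divider all GPUs' frames as one combined batch, setting
# divide_by to the total GPU count splits it back into one video per GPU.
frames_per_video = 81          # hypothetical frame count for one video
divide_by = 3                  # master + 2 workers, as in the example above

combined_batch = list(range(frames_per_video * divide_by))  # stand-in for collected frames
videos = [
    combined_batch[i * frames_per_video:(i + 1) * frames_per_video]
    for i in range(divide_by)
]
assert len(videos) == divide_by and all(len(v) == frames_per_video for v in videos)
```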
Distributed Upscaling
Accelerate Ultimate SD Upscale by distributing tiles across multiple workers, with speed scaling as you add more GPUs.
- Load your image
- Upscale with ESRGAN or similar
- Connect to Ultimate SD Upscale Distributed
- Configure tile settings: if your GPUs are similar, set `static_distribution` to true; otherwise, false (see the sketch after this list)
- Enable workers for faster processing
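As a rough sketch of what `static_distribution` means in practice (conceptual only, not the extension's actual scheduling code): with a static split, every GPU is handed an equal, fixed share of the tiles up front, which works well when the GPUs are similar; with it disabled, faster GPUs can end up handling more tiles, as noted in the FAQ below.

```python
# Conceptual sketch only -- not the extension's code: a static, equal split of
# upscale tiles across GPUs, the sensible choice when the GPUs are similar.
tiles = list(range(12))   # tile indices for one upscale pass
gpu_count = 3             # master + 2 workers

# Round-robin assignment: every GPU gets the same number of tiles up front.
static_assignment = {gpu: tiles[gpu::gpu_count] for gpu in range(gpu_count)}
print(static_assignment)
# {0: [0, 3, 6, 9], 1: [1, 4, 7, 10], 2: [2, 5, 8, 11]}
```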
FAQ
<details>
<summary>Does it combine VRAM of multiple GPUs?</summary>
No, it does not combine VRAM of multiple GPUs.
</details>
<details>
<summary>Does it speed up the generation of a single image or video?</summary>
No, it does not speed up the generation of a single image or video. Instead, it enables the generation of more images or videos simultaneously. However, it can speed up the upscaling of a single image when using the Ultimate SD Upscale Distributed feature.
</details>
<details>
<summary>Does it work with the ComfyUI desktop app?</summary>
Currently, it is not compatible with the ComfyUI desktop app.
</details>
<details>
<summary>Can I combine my RTX 5090 with a GTX 980 to get faster results?</summary>
Yes, you can combine different GPUs, but performance is optimized when using similar GPUs. A significant performance imbalance between GPUs may cause bottlenecks. For upscaling, setting `static_distribution` to `false` allows the faster GPU to handle more processing, which can mitigate some bottlenecks. Note that this setting only applies to upscaling tasks.
</details>
<details>
<summary>Does this work with cloud providers?</summary>
Yes, it is compatible with cloud providers. Refer to the setup guides for detailed instructions.
</details>
<details>
<summary>Can I make this work with my Docker setup?</summary>
Yes, it is compatible with Docker setups, but you will need to configure your Docker environment yourself. Unfortunately, assistance with Docker configuration is not provided.
</details>

Disclaimer
This software is provided "as is" without any warranties, express or implied, including merchantability, fitness for a particular purpose, or non-infringement. The developers and copyright holders are not liable for any claims, damages, or liabilities arising from the use, modification, or distribution of the software. Users are solely responsible for ensuring compliance with applicable laws and regulations and for securing their networks against unauthorized access, hacking, data breaches, or loss. The developers assume no liability for any damages or incidents resulting from misuse, improper configuration, or external threats.
Support the Project
<img width="200" align="right" src="https://github.com/user-attachments/assets/84291921-c44e-4556-94f2-a3b16500f4f9" />If my custom nodes have added value to your workflow, consider fueling future development with a coffee!
Your support helps keep this project thriving.
Buy me a coffee at: https://buymeacoffee.com/robertvoy