# ComfyUI-MixMod
ComfyUI-MixMod provides a powerful way to combine multiple models during sampling.
VRAM requirement: you need to be able to fit two (or more) models at once. Roughly 12 GB is the minimum for two SDXL models, and around 16 GB for SDXL + PixArt Sigma with a Q3 T5-XXL text encoder.
## Features
- Mix multiple models during sampling (see the sketch below)
- SD 1.5 + SDXL (use https://huggingface.co/ostris/sdxl-sd1-vae-lora to align the latents)
- SDXL + PixArt Sigma for increased prompt adherence (requires the GGUF and ExtraModels extensions for ComfyUI)
- Schedule which models are active, and how strongly, across the sampling steps
- Several experimental mixing modes
Please share interesting workflows you find in the Discussions tab; this is all quite experimental and there is still a lot to discover.
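As a rough illustration of the core idea (a minimal sketch, not the extension's actual code), the following blends the noise predictions of two stand-in models with a step-dependent weight; the linear ramp and tensor shapes are assumptions chosen for the example.

```python
import torch

def mix_predictions(pred_a: torch.Tensor, pred_b: torch.Tensor,
                    step: int, total_steps: int) -> torch.Tensor:
    """Blend two models' noise predictions with a step-dependent weight.

    The weight of model B ramps up linearly over the run, so model A
    dominates the early (composition) steps and model B the late
    (detail) steps. A real schedule could be any curve you like.
    """
    w_b = step / max(total_steps - 1, 1)   # 0.0 -> 1.0 across the run
    return (1.0 - w_b) * pred_a + w_b * pred_b

# Stand-in predictions; in practice these would come from two UNets run
# on the same latent at the same timestep.
shape = (1, 4, 128, 128)                   # SDXL-sized latent for a 1024x1024 image
pred_a = torch.randn(shape)
pred_b = torch.randn(shape)
mixed = mix_predictions(pred_a, pred_b, step=5, total_steps=20)
```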
## Installation
- Clone this repository into your ComfyUI `custom_nodes` directory:

```
cd ComfyUI/custom_nodes
git clone https://github.com/Kantsche/comfyui-mixmod.git
```
- Restart ComfyUI if it's already running.
## Tips
- Different models excel at different aspects - try mixing a detail-focused model with a composition-focused one
- For the FFT modes, base models work well for the low frequencies and style models for the high frequencies (see the sketch after this list)
- Experiment with weights and CFG values to find the best balance
- Schedule models to activate at different sampling steps for creative control
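To make the FFT tip above concrete, here is a minimal sketch (assumed shapes and cutoff value, not the extension's implementation) of taking the low spatial frequencies from one model's prediction and the high frequencies from another's:

```python
import torch

def fft_mix(pred_base: torch.Tensor, pred_style: torch.Tensor,
            cutoff: float = 0.25) -> torch.Tensor:
    """Low spatial frequencies from pred_base, high ones from pred_style.

    cutoff is the radius, as a fraction of the Nyquist frequency, below
    which a frequency counts as "low". Inputs are (B, C, H, W) latents
    or noise predictions.
    """
    _, _, h, w = pred_base.shape
    # Per-axis frequency grids; radius 0 is the DC (lowest) component.
    fy = torch.fft.fftfreq(h).view(h, 1)          # cycles per pixel, (H, 1)
    fx = torch.fft.fftfreq(w).view(1, w)          # cycles per pixel, (1, W)
    radius = torch.sqrt(fx ** 2 + fy ** 2)        # (H, W)
    low_mask = (radius <= cutoff * 0.5).to(pred_base.dtype)  # 0.5 = Nyquist

    mixed_fft = (torch.fft.fft2(pred_base) * low_mask
                 + torch.fft.fft2(pred_style) * (1.0 - low_mask))
    return torch.fft.ifft2(mixed_fft).real

# Example with stand-in tensors:
base = torch.randn(1, 4, 128, 128)
style = torch.randn(1, 4, 128, 128)
out = fft_mix(base, style, cutoff=0.25)
```

A smaller cutoff hands more of the result to the style model; a larger one preserves more of the base model's overall composition.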
## Compatibility
- Tested with SD 1.5, SDXL and Pixart Sigma
- Works with different model architectures (base, inpainting, etc.)
- Only tested on Windows
Example workflow with PixArt Sigma: it improves the prompt adherence of SDXL on general prompts.
Example with Pony v6 and NoobAI: comparison images show Pony only, Pony + Noob, and Noob only.
If you want to support the development and some finetunes: https://ko-fi.com/lodestonerock
## Credits
Created by Kantsche