This repository hosts a ComfyUI implementation of UNO (Unity and Novel Output), supporting FLUX models. It includes several new features and optimizations: the full BF16 version runs on 24GB of VRAM, and an FP8 version runs even faster.
You can also use this plugin and its models for free on RunningHub. Run & download this workflow: https://www.runninghub.ai/post/1910316871583789058
New features:

- FP8 support: `flux-dev-fp8` and `flux-schnell-fp8`. `flux-schnell-fp8` offers lower consistency but much faster generation (4 steps).
- BF16 support: `flux-dev` and `flux-schnell` run in BF16 mode on 24GB GPUs.

Models are configured in the root `config.json` file. The default structure expected is:
```
ComfyUI/models/
├── flux/
│   └── FLUX.1-schnell/   # download from https://huggingface.co/black-forest-labs/FLUX.1-schnell
│       ├── text_encoder/
│       ├── tokenizer/
│       ├── text_encoder_2/
│       └── tokenizer_2/
├── unet/
│   ├── flux1-schnell.sft
│   └── flux1-dev.sft
├── vae/
│   └── ae.safetensors
└── UNO/
    └── dit_lora.safetensors
```
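If you are setting up from scratch, the expected layout can be created ahead of time. This is only a convenience sketch based on the tree above; adjust the root to your actual ComfyUI installation path.

```shell
# Create the folder layout expected by the default config.json
# (paths taken from the directory tree above).
ROOT="ComfyUI/models"
mkdir -p "$ROOT/flux/FLUX.1-schnell/text_encoder" \
         "$ROOT/flux/FLUX.1-schnell/tokenizer" \
         "$ROOT/flux/FLUX.1-schnell/text_encoder_2" \
         "$ROOT/flux/FLUX.1-schnell/tokenizer_2" \
         "$ROOT/unet" "$ROOT/vae" "$ROOT/UNO"
ls -R "$ROOT"
```

The model files themselves (`flux1-*.sft`, `ae.safetensors`, `dit_lora.safetensors`) still need to be downloaded into these folders manually or via the Hugging Face CLI.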
For T5 and CLIP models, there are two organization options:

1. Single directory (XLabs-AI/xflux_text_encoders style): set `"t5-in-one": 1` or `"clip-in-one": 1` in `config.json`.
2. Official structure (separate directories): set `"t5-in-one": 0` or `"clip-in-one": 0`, and point `"t5"` or `"clip"` to the parent directory containing the corresponding `text_encoder`/`tokenizer` and `text_encoder_2`/`tokenizer_2` subdirectories.
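As an illustration of the second option, a `config.json` fragment might look like the following. The key names (`t5-in-one`, `clip-in-one`, `t5`, `clip`) come from the options above; the paths are placeholders, not defaults shipped with the plugin.

```json
{
  "t5-in-one": 0,
  "clip-in-one": 0,
  "t5": "ComfyUI/models/flux/FLUX.1-schnell",
  "clip": "ComfyUI/models/flux/FLUX.1-schnell"
}
```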
Default model locations (each overridable in `config.json`):

- VAE: `comfyui/models/vae/`, set with `"vae_base"` in `config.json`.
- FLUX models: `comfyui/models/unet/`, set with `"model_base"` in `config.json`.
- DIT-LoRA models: `comfyui/models/UNO/`, set with `"lora_base"` in `config.json`.

Thanks to the original author. Visit the official repository at: https://github.com/bytedance/UNO
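The override mechanism described above can be sketched in Python. This is a hypothetical helper, not the plugin's actual code; only the key names (`vae_base`, `model_base`, `lora_base`) and default paths come from this README.

```python
import json
import os

# Default base directories from the README; config.json keys override them.
DEFAULTS = {
    "vae_base": "comfyui/models/vae/",
    "model_base": "comfyui/models/unet/",
    "lora_base": "comfyui/models/UNO/",
}

def resolve_model_path(config: dict, key: str, filename: str) -> str:
    """Join a filename onto the configured base dir, falling back to the default."""
    base = config.get(key, DEFAULTS[key])
    return os.path.join(base, filename)

# Example: a config.json that only overrides the VAE location.
config = json.loads('{"vae_base": "/data/models/vae/"}')
print(resolve_model_path(config, "vae_base", "ae.safetensors"))         # uses the override
print(resolve_model_path(config, "lora_base", "dit_lora.safetensors"))  # falls back to the default
```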