ComfyUI Extension: ComfyUI-FlashVSR_Ultra_Fast
Running FlashVSR on lower VRAM without any artifacts.
[Chinese version]
Changelog
2025-10-24
- Added a long-video pipeline that significantly reduces VRAM usage when upscaling long videos.

2025-10-22
- Replaced `Block-Sparse-Attention` with `Sparse_Sage`, removing the need to compile any custom kernels.
- Added support for running on RTX 50 series GPUs.

2025-10-21
- Initial release of this project, introducing features such as `tiled_dit`, which significantly reduces VRAM usage.
Preview

Usage
- mode: `tiny` -> faster (default); `full` -> higher quality
- scale: `4` is always better; if you are low on VRAM, use `2`
- color_fix: Uses a wavelet transform to correct the colors of the output video.
- tiled_vae: Set to True for lower VRAM consumption during decoding, at the cost of speed.
- tiled_dit: Significantly reduces VRAM usage at the cost of speed.
- tile_size, tile_overlap: How to split the input video into tiles.
- unload_dit: Unload the DiT model before decoding to reduce peak VRAM usage, at the cost of speed.
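To illustrate what `tile_size` and `tile_overlap` control, here is a minimal sketch of overlap-tiled processing. The function names `split_into_tiles` and `merge_tiles` are hypothetical and this is not the node's actual implementation; it only shows why overlapping tiles can be recombined without seams.

```python
import numpy as np

def split_into_tiles(frame: np.ndarray, tile_size: int, tile_overlap: int):
    """Split an H x W frame into overlapping tiles (hypothetical sketch).

    Each tile is returned with its top-left (y, x) origin so it can be
    placed back later. Edge tiles may be smaller than tile_size.
    """
    h, w = frame.shape[:2]
    stride = tile_size - tile_overlap
    tiles = []
    for y in range(0, max(h - tile_overlap, 1), stride):
        for x in range(0, max(w - tile_overlap, 1), stride):
            tiles.append(((y, x), frame[y:y + tile_size, x:x + tile_size]))
    return tiles

def merge_tiles(tiles, h: int, w: int) -> np.ndarray:
    """Reassemble tiles, averaging overlapping regions to avoid seams."""
    acc = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    for (y, x), tile in tiles:
        th, tw = tile.shape[:2]
        acc[y:y + th, x:x + tw] += tile
        weight[y:y + th, x:x + tw] += 1.0
    return acc / weight
```

With unmodified tiles, merging reproduces the original frame exactly; in the real pipeline each tile would be upscaled before merging, and the averaged overlap hides tile boundaries.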
Installation
nodes:
cd ComfyUI/custom_nodes
git clone https://github.com/lihaoyun6/ComfyUI-FlashVSR_Ultra_Fast.git
python -m pip install -r ComfyUI-FlashVSR_Ultra_Fast/requirements.txt
models:
- Download the entire `FlashVSR` folder with all the files inside it from here and put it in `ComfyUI/models`:
ComfyUI/models/FlashVSR
├── LQ_proj_in.ckpt
├── TCDecoder.ckpt
├── diffusion_pytorch_model_streaming_dmd.safetensors
└── Wan2.1_VAE.pth
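As a convenience, the layout above can be verified with a short script. This is a sketch, not part of the extension; the helper `missing_models` is hypothetical.

```python
from pathlib import Path

# File names taken from the directory tree in this README.
REQUIRED_FILES = [
    "LQ_proj_in.ckpt",
    "TCDecoder.ckpt",
    "diffusion_pytorch_model_streaming_dmd.safetensors",
    "Wan2.1_VAE.pth",
]

def missing_models(models_dir: str) -> list[str]:
    """Return the required FlashVSR files not found under models_dir/FlashVSR."""
    root = Path(models_dir) / "FlashVSR"
    return [name for name in REQUIRED_FILES if not (root / name).exists()]
```

Running `missing_models("ComfyUI/models")` should return an empty list once all four files are in place.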
Acknowledgments
- FlashVSR @OpenImagingLab
- Sparse_SageAttention @jt-zhang
- ComfyUI @comfyanonymous