ComfyUI_FlashVSR
FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution, packaged as a custom node you can use in ComfyUI.
Update
- Tested with cu128 and torch 2.8.0; about 12 GB of VRAM.
- Choosing the VAE runs inference in full mode for the best quality; choosing the tiny encoder runs in tiny mode for speed.
- If you find this project useful, please star the official FlashVSR project.
1. Installation
In the ./ComfyUI/custom_nodes directory, run the following:
git clone https://github.com/smthemex/ComfyUI_FlashVSR
2. Requirements
pip install -r requirements.txt
- If you are not chasing maximum speed, installing Block-Sparse-Attention is optional. To build it from source:
git clone https://github.com/mit-han-lab/Block-Sparse-Attention
cd Block-Sparse-Attention
pip install packaging
pip install ninja
python setup.py install
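Before launching ComfyUI, a quick environment check can confirm that torch sees the GPU and whether the optional attention kernel is available. This is an illustrative snippet, assuming the Block-Sparse-Attention package installs under the module name block_sparse_attn:

```python
# Illustrative environment check (not part of the node). Assumes the optional
# Block-Sparse-Attention build, if present, is importable as `block_sparse_attn`.
import importlib.util
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
if importlib.util.find_spec("block_sparse_attn") is None:
    print("Block-Sparse-Attention not found; expect the slower attention path.")
else:
    print("block_sparse_attn is installed.")
```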
3. Checkpoints
- 3.1 Download all FlashVSR checkpoints; for the VAE, use the regular Wan2.1 VAE.
- 3.2 The posi_prompt.pth embedding is only about 4 MB.
├── ComfyUI/models/FlashVSR
| ├── LQ_proj_in.ckpt
| ├── TCDecoder.ckpt
| ├── diffusion_pytorch_model_streaming_dmd.safetensors
| ├── posi_prompt.pth
├── ComfyUI/models/vae
| ├── Wan2.1_VAE.pth
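To avoid load errors, you can verify that the files landed in the folders listed above. A minimal sketch, assuming a default ComfyUI directory layout (adjust comfy_root to your install location):

```python
# Minimal check that the checkpoints listed above are where the node expects them.
from pathlib import Path

comfy_root = Path("ComfyUI")  # adjust if ComfyUI lives elsewhere
expected = [
    comfy_root / "models/FlashVSR/LQ_proj_in.ckpt",
    comfy_root / "models/FlashVSR/TCDecoder.ckpt",
    comfy_root / "models/FlashVSR/diffusion_pytorch_model_streaming_dmd.safetensors",
    comfy_root / "models/FlashVSR/posi_prompt.pth",
    comfy_root / "models/vae/Wan2.1_VAE.pth",
]
missing = [p for p in expected if not p.exists()]
print("All checkpoints found." if not missing else f"Missing: {missing}")
```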
Example
- tiny
Acknowledgements
DiffSynth Studio
Block-Sparse-Attention
taehv
Citation
@misc{zhuang2025flashvsrrealtimediffusionbasedstreaming,
title={FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution},
author={Junhao Zhuang and Shi Guo and Xin Cai and Xiaohui Li and Yihao Liu and Chun Yuan and Tianfan Xue},
year={2025},
eprint={2510.12747},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2510.12747},
}