ComfyUI Extension: ComfyUI_DiffuEraser
DiffuEraser is a diffusion model for video inpainting; you can use it in ComfyUI.
Update
- Use the officially recommended VAE file, and switch clip-l to the ComfyUI default; this removes the SD1.5 base model dependency.
- For watermarks, mask_dilation_iter (the mask dilation iteration count) should be lowered somewhat, e.g. to 2; ProPainter's regular sampling is usually enough (the longest side is 960, so upscale afterward if you need more resolution).
- Use ComfyUI v3 mode, fix bugs, add newer diffusers support; you can now run 1280*720 on 12GB VRAM.
- Fixed many bugs; 1280*720 now also runs on 12GB. The blend option on the DiffuEraser sampler node supports two outputs: off reduces flicker, on uses compositing, avoiding repeatedly reloading the model in loops.
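To illustrate what a dilation iteration count like mask_dilation_iter conceptually controls, here is a minimal, hypothetical sketch (not the node's actual implementation): each iteration grows a binary mask outward by one pixel, so larger values cover more area around the masked region, which is why a small value such as 2 is enough for thin watermarks.

```python
import numpy as np

def dilate_mask(mask: np.ndarray, iterations: int = 2) -> np.ndarray:
    """Grow a binary mask outward by one pixel per iteration (4-connected)."""
    m = mask.astype(bool)
    for _ in range(iterations):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]   # grow downward
        grown[:-1, :] |= m[1:, :]   # grow upward
        grown[:, 1:] |= m[:, :-1]   # grow rightward
        grown[:, :-1] |= m[:, 1:]   # grow leftward
        m = grown
    return m.astype(np.uint8)

# A single masked pixel becomes a plus after 1 iteration and a
# diamond of radius 2 after 2 iterations.
mask = np.zeros((7, 7), dtype=np.uint8)
mask[3, 3] = 1
print(dilate_mask(mask, iterations=2).sum())  # -> 13
```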
1. Installation
In the ./ComfyUI/custom_nodes directory, run the following:

```
git clone https://github.com/smthemex/ComfyUI_DiffuEraser.git
```
2. Requirements
- Usually nothing extra is needed, because it is based on SD1.5; if a library does turn out to be missing, install the requirements:

```
pip install -r requirements.txt
```
3. Models
- vae link (layout below)
- clip-l, the normal ComfyUI one
- pcm 1.5 lora address: pcm_sd15_smallcfg_2step_converted.safetensors # example
- ProPainter address # layout below
- unet and brushnet address # layout below
```
-- ComfyUI/models/vae
   |-- sd-vae-ft-mse.safetensors  # vae
-- ComfyUI/models/clip
   |-- clip_l.safetensors  # ComfyUI normal
-- ComfyUI/models/DiffuEraservae
   |-- brushnet
       |-- config.json
       |-- diffusion_pytorch_model.safetensors
   |-- unet_main
       |-- config.json
       |-- diffusion_pytorch_model.safetensors
   |-- propainter
       |-- ProPainter.pth
       |-- raft-things.pth
       |-- recurrent_flow_completion.pth
```
- If using video to mask # you can use an RMBG or BiRefNet model to remove the background
```
-- any/path/briaai/RMBG-2.0  # or auto download
   |-- config.json
   |-- model.safetensors
   |-- birefnet.py
   |-- BiRefNet_config.py
```

Or

```
-- any/path/ZhengPeng7/BiRefNet  # or auto download
   |-- config.json
   |-- model.safetensors
   |-- birefnet.py
   |-- BiRefNet_config.py
   |-- handler.py
```
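Getting the model files into the right folders is the most common setup mistake, so a small, hypothetical checker script like the one below (file names copied from the layouts above; the root path is an assumption, adjust it to your install) can be handy before launching ComfyUI:

```python
from pathlib import Path

# Expected files under the ComfyUI root, per the README layout above.
EXPECTED = {
    "models/vae": ["sd-vae-ft-mse.safetensors"],
    "models/clip": ["clip_l.safetensors"],
    "models/DiffuEraservae/brushnet": [
        "config.json", "diffusion_pytorch_model.safetensors"],
    "models/DiffuEraservae/unet_main": [
        "config.json", "diffusion_pytorch_model.safetensors"],
    "models/DiffuEraservae/propainter": [
        "ProPainter.pth", "raft-things.pth", "recurrent_flow_completion.pth"],
}

def missing_files(comfy_root: str) -> list:
    """Return paths of expected model files not found under comfy_root."""
    root = Path(comfy_root)
    return [str(root / d / f)
            for d, files in EXPECTED.items()
            for f in files
            if not (root / d / f).is_file()]

if __name__ == "__main__":
    gaps = missing_files("ComfyUI")  # assumed root; change as needed
    print("all model files found" if not gaps
          else "\n".join("missing: " + p for p in gaps))
```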
4. Example

- use single mask

5. Citation
```
@misc{li2025diffueraserdiffusionmodelvideo,
  title={DiffuEraser: A Diffusion Model for Video Inpainting},
  author={Xiaowen Li and Haolan Xue and Peiran Ren and Liefeng Bo},
  year={2025},
  eprint={2501.10018},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2501.10018},
}

@inproceedings{zhou2023propainter,
  title={{ProPainter}: Improving Propagation and Transformer for Video Inpainting},
  author={Zhou, Shangchen and Li, Chongyi and Chan, Kelvin C.K and Loy, Chen Change},
  booktitle={Proceedings of IEEE International Conference on Computer Vision (ICCV)},
  year={2023}
}

@misc{ju2024brushnet,
  title={BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion},
  author={Xuan Ju and Xian Liu and Xintao Wang and Yuxuan Bian and Ying Shan and Qiang Xu},
  year={2024},
  eprint={2403.06976},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@article{BiRefNet,
  title={Bilateral Reference for High-Resolution Dichotomous Image Segmentation},
  author={Zheng, Peng and Gao, Dehong and Fan, Deng-Ping and Liu, Li and Laaksonen, Jorma and Ouyang, Wanli and Sebe, Nicu},
  journal={CAAI Artificial Intelligence Research},
  year={2024}
}
```