ComfyUI Workflow for VEnhancer Inference
ComfyUI extension for VEnhancer: A powerful video enhancement model that supports spatial super-resolution, temporal interpolation, and AI-guided refinement.
Features • Installation • Quick Start • Documentation
- 🎥 High-Quality Video Enhancement
- 🚀 Flexible Processing Options
- 🛠️ ComfyUI Integration
```bash
cd ComfyUI/custom_nodes/
git clone https://github.com/vikramxD/VEnhancer-ComfyUI-Wrapper
cd VEnhancer-ComfyUI-Wrapper
uv pip install setuptools
uv pip install -e . --no-build-isolation
```
```python
from venhancer_comfyui.nodes import (
    VideoLoader,
    SingleGPUVEnhancerLoader,
    SingleGPUInference,
    SingleGPUSaver
)

# Load video
video = VideoLoader().load_video("input.mp4")

# Initialize model
model = SingleGPUVEnhancerLoader().load_model(
    version="v2",
    solver_mode="fast"
)

# Enhance video
enhanced = SingleGPUInference().enhance_video(
    model=model,
    video=video,
    prompt="Enhance video quality with cinematic style",
    up_scale=4.0,
    target_fps=24
)

# Save result
SingleGPUSaver().save_video(enhanced, "enhanced.mp4")
```
| Model | Description | Download |
|-------|-------------|----------|
| v1 (paper) | Creative enhancement with strong refinement | Download |
| v2 | Better texture preservation and identity consistency | Download |
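Both checkpoints load through the same node; only the `version` argument differs. A minimal sketch reusing the loader call from the Quick Start above:

```python
from venhancer_comfyui.nodes import SingleGPUVEnhancerLoader

loader = SingleGPUVEnhancerLoader()
# v1: stronger creative refinement; v2: better texture and identity preservation
model_v1 = loader.load_model(version="v1", solver_mode="fast")
model_v2 = loader.load_model(version="v2", solver_mode="fast")
```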
```python
{
    "up_scale": 4.0,       # Spatial upscaling (1.0-8.0)
    "target_fps": 24,      # Target frame rate (8-60)
    "noise_aug": 200,      # Refinement strength (50-300)
    "solver_mode": "fast"  # "fast" (15 steps) or "normal"
}
```
```python
{
    "version": "v2",     # Model version (v1/v2)
    "guide_scale": 7.5,  # Text guidance strength
    "s_cond": 8.0,       # Conditioning strength
    "steps": 15          # Inference steps (fast mode)
}
```
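As a rough illustration of how the basic and advanced parameters above might be combined in one call: `up_scale` and `target_fps` appear in the Quick Start as `enhance_video` arguments, and `version`/`solver_mode` on the loader, but passing `noise_aug`, `guide_scale`, `s_cond`, and `steps` to `enhance_video` is an assumption here, so verify against the node signatures in your install:

```python
# Sketch only: keywords marked "assumed" may belong to a different node.
model = SingleGPUVEnhancerLoader().load_model(
    version="v2",        # Model version (v1/v2)
    solver_mode="fast",  # 15-step fast solver
)

enhanced = SingleGPUInference().enhance_video(
    model=model,
    video=video,
    prompt="Enhance video quality with cinematic style",
    up_scale=4.0,     # Spatial upscaling (confirmed in Quick Start)
    target_fps=24,    # Temporal interpolation target (confirmed in Quick Start)
    noise_aug=200,    # Refinement strength (assumed keyword)
    guide_scale=7.5,  # Text guidance strength (assumed keyword)
    s_cond=8.0,       # Conditioning strength (assumed keyword)
    steps=15,         # Inference steps in fast mode (assumed keyword)
)
```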
Common issues and solutions:

| Issue | Solution |
|-------|----------|
| CUDA Out of Memory | Lower the `up_scale` value |
| Slow Processing | Use `solver_mode="fast"` |
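For example, if enhancement runs out of GPU memory, lowering the spatial upscale while keeping the 15-step fast solver is the first adjustment to try. A sketch using the same nodes as the Quick Start; actual memory use depends on clip length and input resolution:

```python
# OOM-friendly settings: smaller upscale, fast solver.
model = SingleGPUVEnhancerLoader().load_model(version="v2", solver_mode="fast")
enhanced = SingleGPUInference().enhance_video(
    model=model,
    video=video,
    prompt="Enhance video quality with cinematic style",
    up_scale=2.0,   # reduced from 4.0 to lower peak VRAM
    target_fps=24,
)
```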
We welcome contributions! Please see our Contributing Guidelines for details.
This project is licensed under the MIT License - see the LICENSE file for details.
Based on VEnhancer by Jingwen He et al. If you use this extension in your research, please cite:
```bibtex
@article{he2024venhancer,
  title={VEnhancer: Generative Space-Time Enhancement for Video Generation},
  author={He, Jingwen and Xue, Tianfan and Liu, Dongyang and Lin, Xinqi and
          Gao, Peng and Lin, Dahua and Qiao, Yu and Ouyang, Wanli and Liu, Ziwei},
  journal={arXiv preprint arXiv:2407.07667},
  year={2024}
}
```