ComfyUI Extension: ComfyUI-LightVAE

Authored by ModelTC

    ComfyUI-LightVAE

    <div align="center">

    LightX2V

    High-Performance VAE Custom Nodes

    HuggingFace GitHub License

    English | 简体中文

    </div>

    📖 Introduction

    ComfyUI-LightVAE is a collection of LightX2V VAE custom nodes designed for ComfyUI, supporting high-performance video VAE models including LightVAE and LightTAE.

    The LightX2V team has heavily optimized the VAE stage, producing two model series, LightVAE and LightTAE, which significantly reduce memory usage and improve inference speed while maintaining high quality.

    ✨ Key Features

    <table> <tr> <td width="50%">

    🎯 LightVAE Series

    Feature: Best Balance ⚖️

    • ✅ Uses Causal 3D Conv (same as official)
    • ✅ Near-official quality ⭐⭐⭐⭐
    • ✅ ~50% less memory (~4-5 GB)
    • ✅ 2-3x faster
    • ✅ Balances quality, speed, and memory 🏆
    </td> <td width="50%">

    ⚡ LightTAE Series

    Feature: Ultra-fast + High Quality 🏆

    • ✅ Minimal memory usage (~0.4 GB)
    • ✅ Lightning-fast inference
    • ✅ Near-official quality ⭐⭐⭐⭐
    • ✅ Surpasses open-source TAE
    </td> </tr> </table>

    🚀 Performance Comparison

    Test Environment: H100 GPU, BF16, 81-frame video (480P)

    | Model | Encode Time | Decode Time | Encode Memory | Decode Memory | Quality |
    |:------|:------------|:------------|:--------------|:--------------|:--------|
    | lightvaew2_1 | 1.5s | 2.1s | 4.8GB | 5.6GB | ⭐⭐⭐⭐⭐ |
    | lighttaew2_1 | 0.4s | 0.25s | 0.009GB | 0.4GB | ⭐⭐⭐⭐ |
    | Wan2.1_VAE | 4.2s | 5.5s | 8.5GB | 10.1GB | ⭐⭐⭐⭐ |
    | taew2_1 | 0.4s | 0.25s | 0.009GB | 0.4GB | ⭐⭐⭐ |

    Performance Improvements:

    • 🚀 LightVAE is 2-3x faster than official VAE, 50% less memory
    • ⚡ LightTAE is 10+ times faster than official VAE, 95%+ less memory
    • 🎨 Near-official VAE quality, surpasses open-source TAE
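The headline numbers follow directly from the benchmark table above; a quick arithmetic sanity check (using only the table's H100 figures):

```python
# Ratios computed from the benchmark table (H100, BF16, 81-frame 480P clip).
wan_vae  = {"encode_s": 4.2, "decode_s": 5.5,  "decode_gb": 10.1}
lightvae = {"encode_s": 1.5, "decode_s": 2.1,  "decode_gb": 5.6}
lighttae = {"encode_s": 0.4, "decode_s": 0.25, "decode_gb": 0.4}

# LightVAE vs. official Wan2.1 VAE: roughly 2-3x faster, ~45-50% less memory.
lightvae_encode_speedup = wan_vae["encode_s"] / lightvae["encode_s"]    # ~2.8x
lightvae_decode_speedup = wan_vae["decode_s"] / lightvae["decode_s"]    # ~2.6x
lightvae_mem_saving = 1 - lightvae["decode_gb"] / wan_vae["decode_gb"]  # ~45%

# LightTAE vs. official Wan2.1 VAE: 10-20x faster, >95% less memory.
lighttae_encode_speedup = wan_vae["encode_s"] / lighttae["encode_s"]    # ~10.5x
lighttae_decode_speedup = wan_vae["decode_s"] / lighttae["decode_s"]    # 22x
lighttae_mem_saving = 1 - lighttae["decode_gb"] / wan_vae["decode_gb"]  # ~96%
```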

    📦 Installation

    1. Install LightX2V Dependencies

    # Clone LightX2V repository
    git clone https://github.com/ModelTC/LightX2V
    cd LightX2V
    
    python setup_vae.py install
    

    2. Install ComfyUI-WanVideoWrapper

    LightVAE nodes depend on WanVideoWrapper for main model support:

    cd ComfyUI/custom_nodes
    git clone https://github.com/kijai/ComfyUI-WanVideoWrapper
    

    3. Install ComfyUI-LightVAE

    cd ComfyUI/custom_nodes
    git clone https://github.com/YOUR_USERNAME/ComfyUI-LightVAE
    

    4. Restart ComfyUI

    📥 Download Models

    Main Models (Diffusion Models)

    Option 1: Distilled Models (Recommended, 4-step)

    Option 2: Original Models (20-step)

    # Download to ComfyUI/models/diffusion_models/
    huggingface-cli download lightx2v/Wan2.1/2-Distill-Models \
        --local-dir ./ComfyUI/models/diffusion_models/
    

    VAE Models

    VAE Models (at least one is required):

    # Download all VAE models
    huggingface-cli download lightx2v/Autoencoders \
        --local-dir ./ComfyUI/models/vae/
    
    # Or download only what you need (Recommended)
    huggingface-cli download lightx2v/Autoencoders lightvaew2_1.pth \
        --local-dir ./ComfyUI/models/vae/
    

    Supported VAE Models:

    • Wan2.1_VAE.pth / .safetensors - Official VAE 2.1
    • Wan2.2_VAE.pth / .safetensors - Official VAE 2.2
    • lightvaew2_1.pth / .safetensors - Optimized VAE 2.1 ⭐ Recommended
    • taew2_1.pth / .safetensors - Open-source TAE 2.1
    • taew2_2.pth / .safetensors - Open-source TAE 2.2
    • lighttaew2_1.pth / .safetensors - Optimized TAE 2.1 ⚡ Fastest
    • lighttaew2_2.pth / .safetensors - Optimized TAE 2.2

    🎯 Node Documentation

    1. LightX2V VAE Decoder Loader

    VAE Loader

    Input Parameters:

    • vae_filename - VAE model filename (automatically lists from ./models/vae/)
    • dtype - Data type (bfloat16 / float16 / float32)
    • device - Compute device (cuda / cpu)

    Output:

    • vae - VAE model object

    Features:

    • ✅ Automatically identifies VAE type from filename
    • ✅ Supports all LightX2V VAE models

    2. LightX2V VAE Decode

    VAE Decode

    Input Parameters:

    • vae - VAE object from Loader
    • latent - Latent representation

    Output:

    • IMAGE - Decoded video frames

    Supports:

    • ✅ All VAE series (WanVAE, LightVAE)
    • ✅ All TAE series (TAE, LightTAE)
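For reference when wiring latents into the decoder: Wan-family video VAEs are commonly described as compressing 4x temporally (causally, so frame counts have the form 4k+1) and 8x spatially into 16 latent channels. Treat these factors as an assumption — this README does not state them:

```python
# Expected latent shape for a Wan-style video VAE, assuming the commonly
# cited compression factors (4x temporal, causal; 8x spatial; 16 channels).
def wan_latent_shape(frames: int, height: int, width: int) -> tuple:
    assert frames % 4 == 1, "Wan-style VAEs expect frame counts of the form 4k+1"
    assert height % 8 == 0 and width % 8 == 0, "resolution must be divisible by 8"
    latent_frames = (frames - 1) // 4 + 1  # causal: first frame is kept alone
    return (16, latent_frames, height // 8, width // 8)

# The 81-frame 480P benchmark clip (832x480) maps to a (16, 21, 60, 104) latent.
```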

    đŸ–ŧī¸ Example Workflows

    Wan2.1 I2V 4-step FP8 + LightVAE

    High-performance configuration using 4-step distilled model + LightVAE optimized decoder.

    Workflow File: example/workflows/wan2.1_I2V_4step_fp8_lightvae.json

    Wan2.2 TI2V + LightVAE

    Wan2.2 Text-Image-to-Video + LightVAE decoding.

    Workflow File: example/workflows/wan2.2_TI2V_lightvae.json

    âš ī¸ Important Notes

    Model Compatibility

    • âš ī¸ Wan2.1 VAE can only be used with Wan2.1/Wan2.2-A1B backbone models
    • âš ī¸ Wan2.2 VAE can only be used with Wan2.2 TI2V backbone models
    • ❌ Do not mix different versions of VAE and backbone models
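These pairing rules are easy to enforce mechanically before a long render starts. A hypothetical pre-flight check — the table and helper below are illustrative, not part of this extension's API:

```python
# Hypothetical guard encoding the VAE/backbone pairing rules listed above.
# Names are illustrative; adjust to the files you actually downloaded.
VAE_TO_BACKBONES = {
    "Wan2.1_VAE":    {"wan2.1", "wan2.2-a14b"},
    "lightvaew2_1":  {"wan2.1", "wan2.2-a14b"},
    "lighttaew2_1":  {"wan2.1", "wan2.2-a14b"},
    "Wan2.2_VAE":    {"wan2.2-ti2v"},
    "lighttaew2_2":  {"wan2.2-ti2v"},
}

def check_pairing(vae_name: str, backbone: str) -> None:
    allowed = VAE_TO_BACKBONES.get(vae_name)
    if allowed is None:
        raise ValueError(f"Unknown VAE: {vae_name}")
    if backbone.lower() not in allowed:
        raise ValueError(f"{vae_name} is not compatible with backbone {backbone}")
```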

    📚 Related Resources

    • Project Homepage: https://github.com/ModelTC/LightX2V
    • VAE Models: https://huggingface.co/lightx2v/Autoencoders
    • Video Generation Models: https://huggingface.co/lightx2v/
    • ComfyUI-WanVideoWrapper: https://github.com/kijai/ComfyUI-WanVideoWrapper
    • TAE Series Models: https://github.com/madebyollin/taesd
    • Wan-AI: https://huggingface.co/Wan-AI

    🙏 Acknowledgements

    If this project helps you, please give a ⭐ to LightX2V and this repository!

    📞 Support

    • GitHub Issues: Issues page of this repository
    • LightX2V Issues: https://github.com/ModelTC/LightX2V/issues
    • HuggingFace: https://huggingface.co/lightx2v
    <div align="center">

    Enjoy using LightX2V VAE! 🚀

    </div>