ComfyUI Extension: ComfyUI-Ruyi

Authored by IamCreateAI


ComfyUI wrapper nodes for Ruyi, an image-to-video model by CreateAI.


    Ruyi-Models

    English | ็ฎ€ไฝ“ไธญๆ–‡

    Welcome to Ruyi-Models!

    Ruyi is an image-to-video model capable of generating cinematic-quality videos at a resolution of 768, at a frame rate of 24 frames per second, totaling 5 seconds and 120 frames. It supports lens control and motion amplitude control. Using an RTX 3090 or RTX 4090, you can generate 120-frame videos at 512 resolution (or ~72 frames at 768 resolution) without any loss of quality.


    Installation Instructions

    The installation instructions are simple. Just clone the repo and install the requirements.

    git clone https://github.com/IamCreateAI/Ruyi-Models
    cd Ruyi-Models
    pip install -r requirements.txt
    

    For ComfyUI Users

    Method (1): Installation via ComfyUI Manager

    Download and install ComfyUI-Manager.

    cd ComfyUI/custom_nodes/
    git clone https://github.com/ltdrdata/ComfyUI-Manager.git
    
    # install requirements
    pip install -r ComfyUI-Manager/requirements.txt
    

    Next, start ComfyUI and open the Manager. Select Custom Nodes Manager, then search for "Ruyi". You should see ComfyUI-Ruyi as shown in the screenshot below. Click "Install" to proceed.

    <div align=center> <img src="https://github.com/user-attachments/assets/10dda65f-13d5-4da8-9437-9c98b114536c"></img> </div>

    Finally, search for "ComfyUI-VideoHelperSuite" and install it as well.

    Method (2): Manual Installation

    Download and save this repository to the path ComfyUI/custom_nodes/Ruyi-Models.

    # download the repo
    cd ComfyUI/custom_nodes/
    git clone https://github.com/IamCreateAI/Ruyi-Models.git
    
    # install requirements
    pip install -r Ruyi-Models/requirements.txt
    

    Install the dependency ComfyUI-VideoHelperSuite to display video output (skip this step if already installed).

    # download ComfyUI-VideoHelperSuite
    cd ComfyUI/custom_nodes/
    git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite.git
    
    # install requirements
    pip install -r ComfyUI-VideoHelperSuite/requirements.txt
    
    For Windows Users

    When using the Windows operating system, a common distribution is ComfyUI_windows_portable_nvidia. When launched with run_nvidia_gpu.bat, it utilizes the embedded Python interpreter included with the package. Therefore, the environment needs to be set up within this built-in Python.

    For example, if the extracted directory of the distribution is ComfyUI_windows_portable, you can typically use the following command to download the repository and install the runtime environment:

    # download the repo
    cd ComfyUI_windows_portable\ComfyUI\custom_nodes
    git clone https://github.com/IamCreateAI/Ruyi-Models.git
    
    # install requirements using embedded Python interpreter
    ..\..\python_embeded\python.exe -m pip install -r Ruyi-Models\requirements.txt
    

    Download Model (Optional)

    Download the model and save it to a certain path. To run our model directly, it is recommended to save the models in the Ruyi-Models/models folder. For ComfyUI users, the path should be ComfyUI/models/Ruyi.

    | Model Name | Type | Resolution | Max Frames | Frames per Second | Storage Space | Download |
    | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | Ruyi-Mini-7B | Image to Video | 512 & 768 | 120 | 24 | 17 GB | 🤗 |

    For example, after downloading Ruyi-Mini-7B, the file path structure should be:

    ๐Ÿ“ฆ Ruyi-Models/models/ or ComfyUI/models/Ruyi/
    โ”œโ”€โ”€ ๐Ÿ“‚ Ruyi-Mini-7B/
    โ”‚   โ”œโ”€โ”€ ๐Ÿ“‚ transformers/
    โ”‚   โ”œโ”€โ”€ ๐Ÿ“‚ vae/
    โ”‚   โ””โ”€โ”€ ๐Ÿ“‚ ...
    

    This repository supports automatic model downloading, but manual downloading provides more control. For instance, you can download the model to another location and then link it to the ComfyUI/models/Ruyi path using symbolic links or similar methods.
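
    For example, on Linux or macOS a symbolic link can be created like this (the storage path is illustrative; on Windows, `mklink /D` plays the same role):

```shell
# Illustrative paths: keep the weights under ./storage and link them into ComfyUI.
mkdir -p storage/Ruyi-Mini-7B          # actual storage location (example)
mkdir -p ComfyUI/models/Ruyi           # path the ComfyUI nodes read from
ln -sfn "$(pwd)/storage/Ruyi-Mini-7B" ComfyUI/models/Ruyi/Ruyi-Mini-7B
```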

    How to Use

    We provide two ways to run our model. The first is to run the Python script directly:

    python3 predict_i2v.py
    

    Specifically, the script downloads the model to the Ruyi-Models/models folder and uses images from the assets folder as the start and end frames for video inference. You can modify the variables in the script to replace the input images and set parameters such as video length and resolution.
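
    The snippet below is a hypothetical sketch of what such edits look like; the actual variable names at the top of predict_i2v.py may differ, so check the script itself:

```python
# Hypothetical variable names for illustration; see predict_i2v.py for the real ones.
start_image_path = "assets/example_start.png"  # first frame of the video
end_image_path = "assets/example_end.png"      # optional last frame
video_length = 120                             # number of frames (up to 120)
base_resolution = 512                          # 512 or 768
```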

    For users with more than 24GB of GPU memory, you can use predict_i2v_80g.py to speed up generation. For those with less GPU memory, we offer parameters to optimize memory usage, enabling the generation of higher-resolution and longer videos at the cost of longer inference time. The effects of these parameters can be found in the GPU Memory Optimization section below.

    Alternatively, use the ComfyUI wrapper in our GitHub repo; the details of the ComfyUI nodes are described in comfyui/README.md.

    Showcase

    Image to Video Effects

    <table> <tr> <td><video src="https://github.com/user-attachments/assets/4dedf40b-82f2-454c-9a67-5f4ed243f5ea"></video></td> <td><video src="https://github.com/user-attachments/assets/905fef17-8c5d-49b0-a49a-6ae7e212fa07"></video></td> <td><video src="https://github.com/user-attachments/assets/20daab12-b510-448a-9491-389d7bdbbf2e"></video></td> <td><video src="https://github.com/user-attachments/assets/f1bb0a91-d52a-4611-bac2-8fcf9658cac0"></video></td> </tr> </table>

    Camera Control

    <table> <tr> <td align=center><img src="https://github.com/user-attachments/assets/8aedcea6-3b8e-4c8b-9fed-9ceca4d41954" height=200></img>input</td> <td align=center><video src="https://github.com/user-attachments/assets/d9d027d4-0d4f-45f5-9d46-49860b562c69"></video>left</td> <td align=center><video src="https://github.com/user-attachments/assets/7716a67b-1bb8-4d44-b128-346cbc35e4ee"></video>right</td> </tr> <tr> <td align=center><video src="https://github.com/user-attachments/assets/cc1f1928-cab7-4c4b-90af-928936102e66"></video>static</td> <td align=center><video src="https://github.com/user-attachments/assets/c742ea2c-503a-454f-a61a-10b539100cd9"></video>up</td> <td align=center><video src="https://github.com/user-attachments/assets/442839fa-cc53-4b75-b015-909e44c065e0"></video>down</td> </tr> </table>

    Motion Amplitude Control

    <table> <tr> <td align=center><video src="https://github.com/user-attachments/assets/0020bd54-0ff6-46ad-91ee-d9f0df013772"></video>motion 1</td> <td align=center><video src="https://github.com/user-attachments/assets/d1c26419-54e3-4b86-8ae3-98e12de3022e"></video>motion 2</td> <td align=center><video src="https://github.com/user-attachments/assets/535147a2-049a-4afc-8d2a-017bc778977e"></video>motion 3</td> <td align=center><video src="https://github.com/user-attachments/assets/bf893d53-2e11-406f-bb9a-2aacffcecd44"></video>motion 4</td> </tr> </table>

    GPU Memory Optimization

    We provide the options GPU_memory_mode and GPU_offload_steps to reduce GPU memory usage, catering to different user needs.

    Generally speaking, using less GPU memory requires more RAM and results in longer generation times. Below is a reference table of expected GPU memory usage and generation times. Note that the GPU memory reported below is the max_memory_allocated() value. The values read from nvidia-smi may be higher because CUDA itself occupies some GPU memory (usually 500 - 800 MiB) and PyTorch's caching mechanism requests additional GPU memory.
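
    As a rough illustration of that relationship (the 650 MiB context overhead below is an assumed midpoint of the 500 - 800 MiB range, not a measured value):

```python
def estimate_nvidia_smi_mib(max_allocated_bytes: int,
                            cuda_context_mib: float = 650.0) -> float:
    """Estimate the nvidia-smi reading from torch.cuda.max_memory_allocated().

    The CUDA context typically adds ~500-800 MiB on top of the allocated
    memory (650 MiB is an assumed midpoint here), and PyTorch's caching
    allocator can hold more still, so the real reading may be higher.
    """
    return max_allocated_bytes / (1024 ** 2) + cuda_context_mib
```

    For example, a 16119 MiB peak allocation would show up as roughly 16769 MiB in nvidia-smi under this assumption.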

    A100 Results

    • Resolution of 512

    | Num frames | normal_mode + 0 steps | normal_mode + 10 steps | normal_mode + 7 steps | normal_mode + 5 steps | normal_mode + 1 steps | low_gpu_mode + 0 steps |
    | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | 24 frames | 16119MiB <br> 01:01s | 15535MiB <br> 01:07s | 15340MiB <br> 01:13s | 15210MiB <br> 01:20s | 14950MiB <br> 01:32s | 4216MiB <br> 05:14s |
    | 48 frames | 18398MiB <br> 01:53s | 17230MiB <br> 02:15s | 16840MiB <br> 02:29s | 16580MiB <br> 02:32s | 16060MiB <br> 02:54s | 4590MiB <br> 09:59s |
    | 72 frames | 20678MiB <br> 03:00s | 18925MiB <br> 03:31s | 18340MiB <br> 03:53s | 17951MiB <br> 03:57s | 17171MiB <br> 04:25s | 6870MiB <br> 14:42s |
    | 96 frames | 22958MiB <br> 04:11s | 20620MiB <br> 04:54s | 19841MiB <br> 05:10s | 19321MiB <br> 05:14s | 18281MiB <br> 05:47s | 9150MiB <br> 19:17s |
    | 120 frames | 25238MiB <br> 05:42s | 22315MiB <br> 06:34s | 21341MiB <br> 06:59s | 20691MiB <br> 07:07s | 19392MiB <br> 07:41s | 11430MiB <br> 24:08s |

    • Resolution of 768

    | Num frames | normal_mode + 0 steps | normal_mode + 10 steps | normal_mode + 7 steps | normal_mode + 5 steps | normal_mode + 1 steps | low_gpu_mode + 0 steps |
    | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | 24 frames | 18971MiB <br> 02:06s | 17655MiB <br> 02:40s | 17217MiB <br> 02:39s | 16925MiB <br> 02:41s | 16339MiB <br> 03:13s | 5162MiB <br> 13:42s |
    | 48 frames | 24101MiB <br> 04:52s | 21469MiB <br> 05:44s | 20592MiB <br> 05:51s | 20008MiB <br> 06:00s | 18837MiB <br> 06:49s | 10292MiB <br> 20:58s |
    | 72 frames | 29230MiB <br> 08:24s | 25283MiB <br> 09:45s | 25283MiB <br> 09:45s | 23091MiB <br> 10:10s | 21335MiB <br> 11:10s | 15421MiB <br> 39:12s |
    | 96 frames | 34360MiB <br> 12:49s | 29097MiB <br> 14:41s | 27343MiB <br> 15:33s | 26174MiB <br> 15:44s | 23834MiB <br> 16:33s | 20550MiB <br> 43:47s |
    | 120 frames | 39489MiB <br> 18:21s | 32911MiB <br> 20:39s | 30719MiB <br> 21:34s | 29257MiB <br> 21:48s | 26332MiB <br> 23:02s | 25679MiB <br> 63:01s |

    RTX 4090 Results

    The values marked with --- in the table indicate that an out-of-memory (OOM) error occurred, preventing generation.

    • Resolution of 512

    | Num frames | normal_mode + 0 steps | normal_mode + 10 steps | normal_mode + 7 steps | normal_mode + 5 steps | normal_mode + 1 steps | low_gpu_mode + 0 steps |
    | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | 24 frames | 16366MiB <br> 01:18s | 15805MiB <br> 01:26s | 15607MiB <br> 01:37s | 15475MiB <br> 01:36s | 15211MiB <br> 01:39s | 4211MiB <br> 03:57s |
    | 48 frames | 18720MiB <br> 02:21s | 17532MiB <br> 02:49s | 17136MiB <br> 02:55s | 16872MiB <br> 02:58s | 16344MiB <br> 03:01s | 4666MiB <br> 05:01s |
    | 72 frames | 21036MiB <br> 03:41s | 19254MiB <br> 04:25s | 18660MiB <br> 04:34s | 18264MiB <br> 04:36s | 17472MiB <br> 04:51s | 6981MiB <br> 06:36s |
    | 96 frames | -----MiB <br> --:--s | 20972MiB <br> 06:18s | 20180MiB <br> 06:24s | 19652MiB <br> 06:36s | 18596MiB <br> 06:56s | 9298MiB <br> 10:03s |
    | 120 frames | -----MiB <br> --:--s | -----MiB <br> --:--s | 21704MiB <br> 08:50s | 21044MiB <br> 08:53s | 19724MiB <br> 09:08s | 11613MiB <br> 13:57s |

    • Resolution of 768

    | Num frames | normal_mode + 0 steps | normal_mode + 10 steps | normal_mode + 7 steps | normal_mode + 5 steps | normal_mode + 1 steps | low_gpu_mode + 0 steps |
    | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | 24 frames | 19223MiB <br> 02:38s | 17900MiB <br> 03:06s | 17448MiB <br> 03:18s | 17153MiB <br> 03:23s | 16624MiB <br> 03:34s | 5251MiB <br> 05:54s |
    | 48 frames | -----MiB <br> --:--s | -----MiB <br> --:--s | 20946MiB <br> 07:28s | 20352MiB <br> 07:35s | 19164MiB <br> 08:04s | 10457MiB <br> 10:55s |
    | 72 frames | -----MiB <br> --:--s | -----MiB <br> --:--s | -----MiB <br> --:--s | -----MiB <br> --:--s | -----MiB <br> --:--s | 15671MiB <br> 18:52s |

    License

    Weโ€™re releasing the model under a permissive Apache 2.0 license.

    BibTeX

    @misc{createai2024ruyi,
          title = {Ruyi-Mini-7B},
          author = {CreateAI Team},
          year = {2024},
          publisher = {GitHub},
          journal = {GitHub repository},
          howpublished = {\url{https://github.com/IamCreateAI/Ruyi-Models}}
    }
    

    Welcome Feedback and Collaborative Optimization

    We sincerely welcome your feedback and suggestions, and we hope to work with you to improve our services and products. Your input helps us better understand user needs, allowing us to continuously enhance the user experience. Thank you for your support of and interest in our work!

    You are welcome to join our Discord or WeChat group (scan the QR code) for further discussion!
