ComfyUI Extension: ComfyUI_ID_Animator

Authored by smthemex


This node allows you to use ID_Animator, a zero-shot video generation model.


    README

    A node for using ID_Animator in ComfyUI

    NOTICE

    You can find ID_Animator at this link: ID_Animator

    My ComfyUI node list:

    1. ParlerTTS node: ComfyUI_ParlerTTS
    2. Llama3_8B node: ComfyUI_Llama3_8B
    3. HiDiffusion node: ComfyUI_HiDiffusion_Pro
    4. ID_Animator node: ComfyUI_ID_Animator
    5. StoryDiffusion node: ComfyUI_StoryDiffusion
    6. Pops node: ComfyUI_Pops
    7. stable-audio-open-1.0 node: ComfyUI_StableAudio_Open
    8. GLM4 node: ComfyUI_ChatGLM_API
    9. CustomNet node: ComfyUI_CustomNet
    10. Pipeline_Tool node: ComfyUI_Pipeline_Tool
    11. Pic2Story node: ComfyUI_Pic2Story
    12. PBR_Maker node: ComfyUI_PBR_Maker

    Update

    2024-06-15

    1. Fix the issue of AnimateDiff being capped at 32 frames. Thanks to ShmuelRonen for the reminder.
    2. Add conditional control for "face_lora" and "lora_adapter"; the model addresses are given in the model notes below.
    3. Add support for diffusers versions 0.28.0 and above.
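Supporting diffusers 0.28.0 and above usually means branching on the installed version, since newer releases move or rename some module paths. A minimal version-gate sketch; the helper name and the branching idea are illustrative assumptions, not code taken from this repo:

```python
def is_at_least(version: str, minimum: str = "0.28.0") -> bool:
    """Compare dotted numeric version strings, e.g. diffusers.__version__."""
    def parts(v):
        # Keep only the leading numeric components ("0.28.0.dev0" -> [0, 28, 0]).
        out = []
        for piece in v.split("."):
            if piece.isdigit():
                out.append(int(piece))
            else:
                break
        return out
    return parts(version) >= parts(minimum)

# Typical use inside a node (sketch):
# import diffusers
# if is_at_least(diffusers.__version__):
#     ...  # take the >=0.28 code path
```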

    --- Previous updates

    1. Change the output to single-frame images so it can feed other video synthesis nodes, and remove the original option to save a GIF animation.
    2. Add a model-loading menu to make the logic clearer. You can put several more motion models into the ".. ComfyUI_ID_Animator/models/animatediff_models" directory.

    1. Installation

     git clone https://github.com/smthemex/ComfyUI_ID_Animator.git
    

    2. Dependencies

    If a module is missing, open the "if miss module check this requirements.txt" file and install the missing module separately.
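To see which dependencies need installing before opening the requirements file, a small check like this can help. The module names in the example are illustrative only; the real list is in "if miss module check this requirements.txt":

```python
import importlib.util

def missing_modules(names):
    """Return the subset of top-level module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Illustrative names only; consult the requirements file for the real list.
print(missing_modules(["diffusers", "insightface", "imageio"]))
```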

    3. Download the checkpoints

    3.1 dir.. ComfyUI_ID_Animator/models

    • Download the ID-Animator checkpoint "animator.ckpt": link

    3.2 dir.. ComfyUI_ID_Animator/models/animatediff_models

    • Download an AnimateDiff checkpoint such as "mm_sd_v15_v2.ckpt": link

    3.3 dir.. comfy/models/diffusers

    • Download all Stable Diffusion V1.5 files: link
    • or
    • Download most Stable Diffusion V1.5 files: link

    3.4 dir.. comfy/models/checkpoints

    • Download "realisticVisionV60B1_v51VAE.safetensors": link
      or any other DreamBooth model

    3.5 dir.. ComfyUI_ID_Animator/models/image_encoder

    • Download the CLIP image encoder: link

    3.6 dir.. ComfyUI_ID_Animator/models/adapter

    • Download "v3_sd15_adapter.ckpt": link

    3.7 Other models
    On the first run, the insightface models are downloaded automatically to the "X/user/username/.insightface/models/buffalo_l" directory
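Sections 3.1 to 3.7 scatter files across several directories, so a quick script can report what is still missing before the first run. The paths below mirror the sections above relative to the ComfyUI root; the custom_nodes location and the exact filenames (e.g. the AnimateDiff checkpoint and the encoder's config.json) are assumptions that may differ in your setup:

```python
from pathlib import Path

# Expected files relative to the ComfyUI root, following section 3 above.
# Filenames are the examples from the README, not an exhaustive list.
EXPECTED = [
    "custom_nodes/ComfyUI_ID_Animator/models/animator.ckpt",
    "custom_nodes/ComfyUI_ID_Animator/models/animatediff_models/mm_sd_v15_v2.ckpt",
    "custom_nodes/ComfyUI_ID_Animator/models/adapter/v3_sd15_adapter.ckpt",
    "custom_nodes/ComfyUI_ID_Animator/models/image_encoder/config.json",
    "models/checkpoints/realisticVisionV60B1_v51VAE.safetensors",
]

def missing_checkpoints(comfy_root, expected=EXPECTED):
    """Return the expected files that are not present under comfy_root."""
    root = Path(comfy_root)
    return [p for p in expected if not (root / p).exists()]

# Run from (or point at) your ComfyUI root to list what is not yet downloaded.
print(missing_checkpoints("."))
```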

    4. Other

    Because the "ID_Animator" authors do not indicate an open-source license, I have temporarily set this project's license to Apache-2.0

    5. Example

    6. Contact "ID_Animator"

    Xuanhua He: [email protected]

    Quande Liu: [email protected]

    Shengju Qian: [email protected]

    AnimateDiff

    @article{guo2023animatediff,
      title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
      author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Liang, Zhengyang and Wang, Yaohui and Qiao, Yu and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
      journal={International Conference on Learning Representations},
      year={2024}
    }
    
    @article{guo2023sparsectrl,
      title={SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
      author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
      journal={arXiv preprint arXiv:2311.16933},
      year={2023}
    }