This node lets you use ID_Animator, the zero-shot identity-preserving video generation model, in ComfyUI.
You can find the original project here: ID_Animator
1. ParlerTTS node: ComfyUI_ParlerTTS
2. Llama3_8B node: ComfyUI_Llama3_8B
3. HiDiffusion node: ComfyUI_HiDiffusion_Pro
4. ID_Animator node: ComfyUI_ID_Animator
5. StoryDiffusion node: ComfyUI_StoryDiffusion
6. Pops node: ComfyUI_Pops
7. stable-audio-open-1.0 node: ComfyUI_StableAudio_Open
8. GLM4 node: ComfyUI_ChatGLM_API
9. CustomNet node: ComfyUI_CustomNet
10. Pipeline_Tool node: ComfyUI_Pipeline_Tool
11. Pic2Story node: ComfyUI_Pic2Story
12. PBR_Maker node: ComfyUI_PBR_Maker
2024-06-15
1. Fixed the 32-frame cap on the AnimateDiff frame count. Thanks to ShmuelRonen for pointing it out.
2. Added conditional control for face_lora and the lora_adapter; the model links are in the model notes below.
3. Added support for diffusers 0.28.0 and above.
--- Previous updates
1. Output is now individual image frames, which makes it easy to connect other video-compositing nodes; the original option to save a GIF animation has been removed.
2. Added a model-loading menu to make the logic clearer; you can place several motion models in the ".. ComfyUI_ID_Animator/models/animatediff_models" directory.
git clone https://github.com/smthemex/ComfyUI_ID_Animator.git
If a module is missing, refer to the "if miss module check this requirements.txt" file and install the missing module separately.
3.1 dir.. ComfyUI_ID_Animator/models
3.2 dir.. ComfyUI_ID_Animator/models/animatediff_models
3.3 dir.. comfy/models/diffusers
3.4 dir.. comfy/models/checkpoints
3.5 dir.. ComfyUI_ID_Animator/models/image_encoder
3.6 dir.. ComfyUI_ID_Animator/models/adapter
3.7 other models
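The folders listed above can be created ahead of time so downloaded model files have a place to land. A minimal sketch, assuming a ComfyUI checkout in the current directory; the root path and the `custom_nodes` placement of this node are assumptions, so adjust them to your install:

```python
from pathlib import Path

# Hypothetical ComfyUI root; adjust to your actual install location.
COMFY_ROOT = Path("ComfyUI")
NODE_DIR = COMFY_ROOT / "custom_nodes" / "ComfyUI_ID_Animator"

# Model directories this node expects (from the directory list above).
MODEL_DIRS = [
    NODE_DIR / "models",
    NODE_DIR / "models" / "animatediff_models",
    COMFY_ROOT / "models" / "diffusers",
    COMFY_ROOT / "models" / "checkpoints",
    NODE_DIR / "models" / "image_encoder",
    NODE_DIR / "models" / "adapter",
]

# Create any folder that does not exist yet; existing folders are left alone.
for d in MODEL_DIRS:
    d.mkdir(parents=True, exist_ok=True)
```

After running this, drop the corresponding model files into each folder.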
On the first run, the insightface models are downloaded automatically to the "X/user/username/.insightface/models/buffalo_l" directory.
Because the "ID_Animator" authors did not specify an open-source license, I have temporarily set this project's license to Apache-2.0.
Xuanhua He: [email protected]
Quande Liu: [email protected]
Shengju Qian: [email protected]
@article{guo2023animatediff,
title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Liang, Zhengyang and Wang, Yaohui and Qiao, Yu and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
journal={International Conference on Learning Representations},
year={2024}
}
@article{guo2023sparsectrl,
title={SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
journal={arXiv preprint arXiv:2311.16933},
year={2023}
}