Improved AnimateAnyone implementation that lets you use a pose image sequence and a reference image to generate stylized video.<br> The current goal of this project is to achieve the desired pose2video result at 1+ FPS on GPUs equal to or better than an RTX 3080!<br> Note: installing the dependencies may reinstall some torch-related packages, which can compromise your existing torch environment due to version conflicts.
<video controls autoplay loop src="https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/assets/62230687/572eaa8d-6011-42dc-9ac5-9bbd86e4ac9d" muted="false"></video><br>
steps=20, context_frames=24; Takes 835.67 seconds to generate on an RTX 3080 GPU
<br><video controls autoplay loop src="https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/assets/62230687/4e5f6b80-88a7-4bf8-9c81-7a00b5a02c76" muted="false" width="320"></video><br>
steps=20, context_frames=12; Takes 425.65 seconds to generate on an RTX 3080 GPU<br>
steps=20, context_frames=12; Takes 407.48 seconds to generate on an RTX 3080 GPU
<br><video controls autoplay loop src="https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/assets/62230687/45c6aaeb-b750-4d44-8c31-edbdcf1068d8" muted="false" width="320"></video><br>
steps=20, context_frames=24; Takes 606.56 seconds to generate on an RTX 3080 GPU
<br><video controls autoplay loop src="https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/assets/62230687/e8c712ec-fc7f-4679-ae41-99449f4f76aa" muted="false" width="320"></video><br>
steps=20, context_frames=12; Takes 450.66 seconds to generate on an RTX 3080 GPU
<br><video controls autoplay loop src="https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved/assets/62230687/6a5b7c28-943d-4ff2-83de-3460ab1a6b61" muted="false" width="320"></video><br>
context_frames=24

Note: generation time depends mainly on steps and context_frames, and does not correlate with the length of the pose image sequence.
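The timings above suggest that cost scales with `context_frames` rather than with the total number of pose frames. As a rough illustration only (not this repo's actual scheduler; the `overlap` value and scheduling details are assumptions), an AnimateDiff-style uniform context scheduler splits the pose sequence into overlapping windows of at most `context_frames` frames, so each denoising pass works on fixed-size windows:

```python
# Illustrative sketch only: how a sliding context window over a pose sequence
# might be formed. `overlap` and the exact scheduling are assumptions, not
# the logic used by this repo.
def context_windows(total_frames: int, context_frames: int = 24, overlap: int = 4):
    """Yield lists of frame indices, each at most `context_frames` long."""
    if total_frames <= context_frames:
        yield list(range(total_frames))
        return
    stride = context_frames - overlap
    start = 0
    while True:
        end = min(start + context_frames, total_frames)
        yield list(range(start, end))
        if end == total_frames:
            break
        start += stride

# Example: a 72-frame pose sequence processed with context_frames=24
for window in context_windows(72, context_frames=24):
    print(window[0], "->", window[-1])
```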
Clone this repo into `Your ComfyUI root directory\ComfyUI\custom_nodes\` and install the dependent Python packages:
cd Your_ComfyUI_root_directory\ComfyUI\custom_nodes\
git clone https://github.com/MrForExample/ComfyUI-AnimateAnyone-Evolved.git
pip install -r requirements.txt
# If you get an error regarding diffusers, run:
pip install --force-reinstall "diffusers>=0.26.1"
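To confirm the requirement is satisfied, you can print the installed diffusers version (plain Python, nothing specific to this repo):

```python
import diffusers

# The install step above requires diffusers >= 0.26.1
print(diffusers.__version__)
```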
./pretrained_weights/
|-- denoising_unet.pth
|-- motion_module.pth
|-- pose_guider.pth
|-- reference_unet.pth
`-- stable-diffusion-v1-5
|-- feature_extractor
| `-- preprocessor_config.json
|-- model_index.json
|-- unet
| |-- config.json
| `-- diffusion_pytorch_model.bin
`-- v1-inference.yaml
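As an optional sanity check (a hypothetical helper, not part of the repo), a short script can confirm that the files listed above are in place before you launch ComfyUI:

```python
from pathlib import Path

# Hypothetical sanity check: verify the pretrained_weights layout shown above.
EXPECTED_FILES = [
    "denoising_unet.pth",
    "motion_module.pth",
    "pose_guider.pth",
    "reference_unet.pth",
    "stable-diffusion-v1-5/feature_extractor/preprocessor_config.json",
    "stable-diffusion-v1-5/model_index.json",
    "stable-diffusion-v1-5/unet/config.json",
    "stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin",
    "stable-diffusion-v1-5/v1-inference.yaml",
]

root = Path("./pretrained_weights")
missing = [rel for rel in EXPECTED_FILES if not (root / rel).exists()]
if missing:
    print("Missing weights:", *missing, sep="\n  ")
else:
    print("All expected weight files found.")
```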
Put the CLIP vision (image encoder) weights under `Your_ComfyUI_root_directory\ComfyUI\models\clip_vision` and the VAE weights under `Your_ComfyUI_root_directory\ComfyUI\models\vae`.