This repo, named ComfyUI-LivePortrait_v3, provides ComfyUI custom nodes for LivePortrait, thanks to the paper LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control. We developed a custom node for LivePortrait v3 that can be used flexibly in ComfyUI: it supports an image-driven mode and regional control, so a single image can serve as the driving signal for a source image or video, for example to generate emoji-style expressions from photos.
To install, clone this repository:
git clone https://github.com/VangengLab/ComfyUI-LivePortrait_v3.git
cd ComfyUI-LivePortrait_v3
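Note that ComfyUI only discovers custom nodes inside its custom_nodes folder, so the clone should land there. A minimal sketch, assuming a standard ComfyUI checkout (the /path/to/ComfyUI placeholder is an assumption; adjust it to your installation):

cd /path/to/ComfyUI/custom_nodes
git clone https://github.com/VangengLab/ComfyUI-LivePortrait_v3.git
cd ComfyUI-LivePortrait_v3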
The environment can be configured the same way as Comfyui_liveportrait_v1 and LivePortrait. For details, refer to https://github.com/KwaiVGI/LivePortrait and https://github.com/kijai/ComfyUI-LivePortraitKJ, or replicate my environment by running:
pip install -r requirements.txt
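If you use the Windows portable build of ComfyUI, dependencies generally need to go into its embedded Python rather than your system Python. A hedged sketch, assuming the portable build's default python_embeded layout (this path is an assumption, not something this repo documents):

.\python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-LivePortrait_v3\requirements.txt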
The easiest way to download the pretrained weights is from HuggingFace:
cd models/
huggingface-cli download KwaiVGI/LivePortrait --local-dir Liveportrait_v3 --exclude "*.git*" "README.md" "docs"
If you cannot access Hugging Face, you can use hf-mirror to download:
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download KwaiVGI/LivePortrait --local-dir Liveportrait_v3 --exclude "*.git*" "README.md" "docs"
You can also manually download the models from https://huggingface.co/KwaiVGI/LivePortrait/tree/main, and remember to put them in models/Liveportrait_v3.
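For reference, this is roughly the layout you should end up with, based on the file structure of the upstream KwaiVGI/LivePortrait repository on Hugging Face (treat it as a sketch; the exact file list upstream may change):

models/Liveportrait_v3/
├── insightface/models/buffalo_l/   (face detection/analysis models)
└── liveportrait/
    ├── base_models/                (appearance_feature_extractor.pth, motion_extractor.pth, spade_generator.pth, warping_module.pth)
    ├── retargeting_models/         (stitching_retargeting_module.pth)
    └── landmark.onnx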
### Note
There may be some problems locating the models; if so, please use absolute paths, or tell me and I will offer help. My CUDA version is 12.1, which causes fewer problems.