# ComfyUI-PersonaLive

This is a ComfyUI custom node implementation of PersonaLive: Expressive Portrait Image Animation for Live Streaming.
> [!NOTE]
> Currently, this implementation only supports **Image Input** (driving the portrait with a single reference image). Video driving support is planned for future updates.

- **Original Repository:** GVCLab/PersonaLive
- **Paper:** arXiv:2512.11253
I deeply appreciate the authors Zhiyuan Li, Chi-Man Pun, Chen Fang, Jue Wang, and Xiaodong Cun for their amazing work and for sharing their code.
## Installation

1. Clone this repository into your `ComfyUI/custom_nodes/` directory and install the dependencies:

   ```bash
   cd ComfyUI/custom_nodes/
   git clone https://github.com/okdalto/ComfyUI-PersonaLive
   cd ComfyUI-PersonaLive
   pip install -r requirements.txt
   ```

2. **Model Setup**: choose one of the two options below.
### Option 1: Automatic Download (Recommended)

Models will be downloaded automatically from Hugging Face the first time you use the `PersonaLiveCheckpointLoader` node. The node will:

- Detect missing models in your selected directory
- Download the required models (~15–20 GB total):
  - `lambdalabs/sd-image-variations-diffusers` (Base Model)
  - `stabilityai/sd-vae-ft-mse` (VAE)
  - `huaichang/PersonaLive` (PersonaLive Weights)
- Organize them into the correct directory structure automatically

> [!NOTE]
> The first download will take some time depending on your internet speed. Models are cached locally, so subsequent loads will be instant.
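For reference, the auto-download step behaves roughly like the sketch below. This is a simplified illustration, not the node's actual code; the target folder names and the use of `huggingface_hub.snapshot_download` are assumptions based on the layout described in Option 2.

```python
# Minimal sketch of the auto-download logic (illustrative, not the node's
# actual implementation). Assumes `pip install huggingface_hub`.
from pathlib import Path
from huggingface_hub import snapshot_download

MODEL_REPOS = {
    "sd-image-variations-diffusers": "lambdalabs/sd-image-variations-diffusers",  # Base Model
    "sd-vae-ft-mse": "stabilityai/sd-vae-ft-mse",                                 # VAE
    "persona_live": "huaichang/PersonaLive",                                      # PersonaLive Weights
}

def ensure_models(model_dir: str) -> None:
    """Download any repo whose target folder is missing or empty."""
    root = Path(model_dir)
    for folder, repo_id in MODEL_REPOS.items():
        target = root / folder
        if not target.is_dir() or not any(target.iterdir()):
            print(f"Downloading {repo_id} -> {target}")
            snapshot_download(repo_id=repo_id, local_dir=str(target))

ensure_models("ComfyUI/models/persona_live")
```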
### Option 2: Manual Download

If you prefer to download the models manually or have connectivity issues, create a folder named `persona_live` inside your `ComfyUI/models/` directory with the following structure:

```
ComfyUI/models/
└── persona_live/
    ├── sd-image-variations-diffusers/   <-- Base Model
    ├── sd-vae-ft-mse/                   <-- VAE
    └── persona_live/                    <-- PersonaLive Repository
        └── pretrained_weights/
            └── personalive/             <-- .pth files location
                ├── denoising_unet.pth
                ├── motion_encoder.pth
                ├── motion_extractor.pth
                ├── pose_guider.pth
                ├── reference_unet.pth
                └── temporal_module.pth
```

Then:

- **Download the Base Models:** `lambdalabs/sd-image-variations-diffusers` and `stabilityai/sd-vae-ft-mse`
- **Download the PersonaLive Weights:** clone or download the entire `huaichang/PersonaLive` repository from Hugging Face
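Before launching ComfyUI, a quick sanity check can confirm the layout matches. The script below is a hypothetical helper (not part of this repo) that only assumes the directory structure shown above:

```python
# Hypothetical layout check for the manual download; not part of this repo.
from pathlib import Path

EXPECTED_PTH = [
    "denoising_unet.pth", "motion_encoder.pth", "motion_extractor.pth",
    "pose_guider.pth", "reference_unet.pth", "temporal_module.pth",
]

root = Path("ComfyUI/models/persona_live")
weights = root / "persona_live" / "pretrained_weights" / "personalive"

missing = [d for d in ("sd-image-variations-diffusers", "sd-vae-ft-mse")
           if not (root / d).is_dir()]
missing += [f for f in EXPECTED_PTH if not (weights / f).is_file()]

print("All models in place." if not missing else f"Missing: {missing}")
```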
## Usage

1. **PersonaLiveCheckpointLoader**: Select the `model_dir` (e.g., `persona_live`) that contains all your models.
2. **PersonaLivePhotoSampler**:
   - Connect the pipeline from the loader.
   - Connect `ref_image` (the source portrait) and `driving_image` (the pose reference).
   - Set `width` and `height` (default 512). The node automatically resizes inputs to this resolution for processing, then restores the original resolution for the output (see the sketch below).
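The resize-and-restore behavior is conceptually similar to the following sketch. It assumes ComfyUI's `IMAGE` tensor convention of `[batch, height, width, channels]` in the 0–1 range; the sampler call itself is elided, and the node's internal code may differ:

```python
# Illustrative sketch of "resize for processing, restore original size".
# Uses ComfyUI's IMAGE convention: float tensors of shape [B, H, W, C] in 0..1.
import torch
import torch.nn.functional as F

def resize_bhwc(img: torch.Tensor, h: int, w: int) -> torch.Tensor:
    x = img.permute(0, 3, 1, 2)   # BHWC -> BCHW, as interpolate expects
    x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
    return x.permute(0, 2, 3, 1)  # back to BHWC

def sample(ref_image: torch.Tensor, width: int = 512, height: int = 512) -> torch.Tensor:
    orig_h, orig_w = ref_image.shape[1], ref_image.shape[2]
    x = resize_bhwc(ref_image, height, width)   # model runs at height x width
    out = x                                     # placeholder for the actual sampler
    return resize_bhwc(out, orig_h, orig_w)     # restore the input resolution
```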
### Usage Tips

- **Input Images:** Square images (1:1 aspect ratio) are highly recommended for both `ref_image` and `driving_image` to ensure the best face alignment and generation quality (see the crop helper below).
- **Inference Steps:** The model is optimized for 4 steps. If you increase this value, make sure it is a multiple of 4 (e.g., 8, 12, 16) to prevent errors.
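If your source images are not square, a simple center crop before loading them into ComfyUI satisfies the 1:1 recommendation. The helper below is hypothetical (not part of the node pack) and uses Pillow:

```python
# Hypothetical pre-processing helper: center-crop an image to a 1:1 square.
from PIL import Image

def center_crop_square(path: str) -> Image.Image:
    img = Image.open(path)
    side = min(img.size)                 # shortest edge becomes the square side
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    return img.crop((left, top, left + side, top + side))

center_crop_square("portrait.jpg").save("portrait_square.jpg")
```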
## 🧪 Example Workflow

An example workflow is provided in the `example` folder. You can drag and drop the `.json` file from there into ComfyUI to get started quickly.
## To-Do
- [ ] Support Video Input (Driving Video)
## ❤️ Acknowledgements
This project is simply a ComfyUI wrapper. All credit for the underlying technology and model architecture goes to the original authors of PersonaLive and the projects they built upon (Moore-AnimateAnyone, X-NeMo, StreamDiffusion, RAIN, LivePortrait).