ComfyUI Node: Animate Anyone Sampler

Authored by MrForExample

Category

AnimateAnyone-Evolved

Inputs

reference_unet UNET2D
denoising_unet UNET3D
ref_image_latent LATENT
clip_image_embeds CLIP_VISION_OUTPUT
pose_latent POSE_LATENT
seed INT
steps INT
cfg FLOAT
delta FLOAT
context_frames INT
context_stride INT
context_overlap INT
context_batch_size INT
interpolation_factor INT
sampler_scheduler_pairs
  • DDIM
  • DPM++ 2M Karras
  • LCM
  • Euler
  • Euler Ancestral
  • LMS
  • PNDM
beta_start FLOAT
beta_end FLOAT
beta_schedule
  • linear
  • scaled_linear
  • squaredcos_cap_v2
prediction_type
  • v_prediction
  • epsilon
  • sample
timestep_spacing
  • trailing
  • linspace
  • leading
steps_offset INT
clip_sample BOOLEAN
rescale_betas_zero_snr BOOLEAN
use_lora BOOLEAN
lora_name
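The beta_start, beta_end, and beta_schedule inputs follow the usual diffusion noise-schedule convention: betas are interpolated from beta_start to beta_end over the training timesteps, either directly (linear) or in square-root space (scaled_linear, the Stable Diffusion convention). A minimal sketch of what these parameters mean (the helper name and default values here are illustrative, not the node's actual code; squaredcos_cap_v2 is omitted for brevity):

```python
def make_betas(beta_start=0.00085, beta_end=0.012,
               num_train_timesteps=1000, beta_schedule="scaled_linear"):
    """Build the per-timestep beta noise schedule (squaredcos_cap_v2 omitted)."""
    n = num_train_timesteps
    if beta_schedule == "linear":
        # Interpolate betas directly between beta_start and beta_end.
        return [beta_start + i * (beta_end - beta_start) / (n - 1) for i in range(n)]
    if beta_schedule == "scaled_linear":
        # Interpolate in sqrt space, then square (Stable Diffusion convention).
        s, e = beta_start ** 0.5, beta_end ** 0.5
        return [(s + i * (e - s) / (n - 1)) ** 2 for i in range(n)]
    raise ValueError(f"unsupported beta_schedule: {beta_schedule}")
```

With either schedule the betas increase monotonically from beta_start to beta_end; scaled_linear front-loads smaller betas, which is what Stable Diffusion-family checkpoints were trained with.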

Outputs

LATENT

Extension: ComfyUI-AnimateAnyone-Evolved

Improved AnimateAnyone implementation that lets you use a pose image sequence and a reference image to generate stylized video. The current goal of this project is to achieve the desired pose2video result at 1+ FPS on GPUs equal to or better than an RTX 3080! πŸš€ [w/The torch environment may be compromised by version conflicts, as some torch-related packages are reinstalled during setup.]
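Long pose sequences are denoised in overlapping temporal windows governed by the context_frames, context_stride, and context_overlap inputs (an AnimateDiff-style context scheduler). A minimal sketch of plain overlapping windows, ignoring context_stride, which adds strided multi-scale windows (the function name and defaults are illustrative, not the extension's API):

```python
def context_windows(num_frames, context_frames=16, context_overlap=4):
    """Split frame indices into overlapping windows of `context_frames`."""
    assert 0 <= context_overlap < context_frames
    step = context_frames - context_overlap  # frames advanced per window
    windows, start = [], 0
    while True:
        end = min(start + context_frames, num_frames)
        windows.append(list(range(start, end)))
        if end == num_frames:
            break
        start += step
    return windows
```

Consecutive windows share context_overlap frames, so their predictions can be blended in the overlap region to avoid visible seams between chunks.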
