IPAdapter Batch Unfold Workflow - HotShotXL
CIVITAI Link (I keep these workflows up to date usually): https://civitai.com/articles/3194
This is the workflow I used to make the base video. It uses HotShotXL/SDXL. Note that I am starting at step 5 - meaning I am doing a partial denoise. It still works from step 0, but I think starting later helps with consistency and speeds up the run.
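As a rough sketch of what "starting at step 5" means: KSamplerAdvanced skips the first part of the schedule, so the effective denoise is roughly (steps - start_at_step) / steps. The helper below is my own illustration, not part of the workflow, and the 25-step total for the base pass is an assumption (only the upscale pass's step count is stated):

```python
def effective_denoise(steps: int, start_at_step: int) -> float:
    """Approximate fraction of the diffusion schedule actually run
    when KSamplerAdvanced starts partway in. This is a rough linear
    approximation; real noise schedules are not perfectly linear."""
    if not 0 <= start_at_step <= steps:
        raise ValueError("start_at_step must be within [0, steps]")
    return (steps - start_at_step) / steps

# Starting at step 5 of an assumed 25-step run leaves ~80% of the denoise:
print(effective_denoise(25, 5))   # 0.8
# The upscale pass (step 15 of 25) is a much lighter ~40% denoise:
print(effective_denoise(25, 15))  # 0.4
```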
It requires a keyframe (i.e. do a small AnimateDiff run without the IPAdapter and use one image from that as the keyframe). The reason for this is that the batch unfold IPAdapter will push things toward realism, and the keyframe pushes back toward the anime (or whatever) style.
For the video I posted I had a 2nd upscaling workflow which is identical to this one but resizes the image to 720p and starts at step 15 of 25.
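The resize in the upscale pass is just an ImageScale to 720p. A sketch of the dimension math (my own helper, not a workflow node - snapping to a multiple of 8 keeps the latent dimensions valid):

```python
def scale_to_height(width: int, height: int,
                    target_h: int = 720, multiple: int = 8) -> tuple[int, int]:
    """Scale (width, height) so the height becomes target_h, preserving
    aspect ratio and rounding both sides to a multiple of `multiple`."""
    new_w = round(width * target_h / height / multiple) * multiple
    new_h = round(target_h / multiple) * multiple
    return new_w, new_h

# e.g. a 1024x576 base frame upscaled to 720p:
print(scale_to_height(1024, 576))  # (1280, 720)
```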
If you decide to do 1.5 - I have found no keyframe is needed, at least for anime models - likely because they are more overtrained. So only the one IPAdapter is needed.
I will make a self-contained workflow, but that will take me a while. I assume most here can figure out how to keyframe for now.
Do note that loading the unfold node takes a lot of VRAM - if you get an OOM error (which I do for >100 frames), just allow smart memory allocation (i.e. VRAM to RAM - hit Queue Prompt again and it will do this). Once the KSampler starts, VRAM usage goes back down to normal.
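If the OOM persists even with smart memory, ComfyUI's launch flags can trade speed for VRAM headroom. These are standard ComfyUI CLI flags, but verify against your version:

```shell
# Offload model weights more aggressively to system RAM (slower):
python main.py --lowvram
# Keep everything in system RAM (slowest, lowest VRAM use):
python main.py --novram
# Smart memory (VRAM -> RAM fallback) is on by default; this turns it off:
python main.py --disable-smart-memory
```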
https://discord.com/channels/1076117621407223829/1180247137628455043
Metadata
Groups
- Inputs
- Outputs
- ControlNet
- Prompt
- Animate Diff Nodes
- Video Reference IPAdapter
- Keyframe IPAdapter
Checkpoints
ComfyUI Nodes
- VAEEncode
- PreviewImage
- ADE_AnimateDiffLoaderWithContext
- MiDaS-DepthMapPreprocessor
- CLIPTextEncodeSDXL
- ControlNetApplyAdvanced
- ADE_AnimateDiffUniformContextOptions
- CheckpointLoaderSimple
- VAELoader
- IPAdapterApply
- PrepImageForClipVision
- KSamplerAdvanced
- ControlNetLoaderAdvanced
- ImageScale
- CLIPVisionLoader
- IPAdapterModelLoader
- LoadImage
- PrimitiveNode
- VAEDecode
- VHS_LoadVideo
- VHS_VideoCombine