Steerable Motion is a ComfyUI node for batch creative interpolation. Our goal is to feature the best methods for steering motion with images as video models evolve. This node is best used via Dough - a creative tool which simplifies the settings and provides a nice creative flow - or in Discord - by joining this channel.
The main settings are:
Other than image adherence, which is set for the entire generation, these settings can be applied linearly - the same for each frame - or dynamically - varying for each frame. You can find detailed instructions on how to tweak them inside the workflow above.
Tweaking the settings can greatly influence the motion. For example, below are two examples of the same images animated, with just one setting changed: the length of each frame's influence:
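To build an intuition for what "length of influence" does, here is a minimal sketch of how per-frame influence weights could be scheduled. The function name, the linear falloff, and all parameters are assumptions for illustration - this is not Steerable Motion's actual implementation:

```python
# Hypothetical sketch: each keyframe's influence peaks at 1.0 on its own
# frame and falls off linearly to 0 over `influence_length` frames on
# either side. Overlapping influences are what blend adjacent images.

def influence_weights(num_frames, keyframe_positions, influence_length):
    weights = []
    for kf in keyframe_positions:
        row = []
        for f in range(num_frames):
            distance = abs(f - kf)
            row.append(max(0.0, 1.0 - distance / influence_length))
        weights.append(row)
    return weights

# Two keyframes at frames 0 and 8 of a 9-frame batch; a longer
# influence_length means each image shapes more of the surrounding frames.
w = influence_weights(9, [0, 8], influence_length=4)
```

Stretching `influence_length` widens each curve, so frames between keyframes receive a stronger pull from both images - which is why that one setting changes the character of the motion so much.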
This isn't a tool like text-to-video that will perform well out of the box - it's more like a paintbrush: an artistic tool that you need to figure out how to get the best from.
Through trial and error, you'll need to build an understanding of how the motion and settings work, what its limitations are, which input images work best with it, and so on.
It won't work for everything, but if you can figure out how to wield it, this approach can provide enough control for you to make beautiful things that match your imagination precisely.
Below are 5 basic workflows - each with its own weird and unique characteristics, differing levels of adherence, and different types of motion. Most of the changes come from tweaking the IPA configuration and switching out base models:
You can see each in action below:
The workflows I share above are just basic examples of it in action - below are two other workflows people in our community have created on top of this node that leverage the same underlying mechanism in creative and interesting ways:
First, @idgallagher uses LCM and different settings to achieve a really interesting realistic motion effect. You can grab it here and see an example output here:
Next, Superbeasts.ai uses depth maps to control the motion in different layers - creating a smoother motion effect. You can grab this workflow here and see an example of it in action here:
I believe that there are endless ways to expand upon and extend the ideas in this node - if you do anything cool, please share!
You're very welcome to drop into our Discord here.
This code draws heavily from Cubiq's IPAdapter_plus, while the workflow uses Kosinkadink's AnimateDiff-Evolved and ComfyUI-Advanced-ControlNet, FizzleDorf's FizzNodes, Fannovel16's Frame Interpolation, and more. Thanks to all of them, and of course to the AnimateDiff team, ControlNet, others, and our supportive community!