WarpFusion workflow wrapper for ComfyUI
WarpFusion Custom Nodes for ComfyUI
Sample workflows are located in ComfyWarp\sample_workflow\
Input a path to extracted frames
start_frame - frame to begin with
end_frame - frame to end with
nth_frame - keep every n-th frame
If you provide a path to a video, only every n-th frame between start_frame and end_frame will be extracted. If you provide a folder or a glob pattern, only every n-th frame in that range will be stored in the dataset.
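The frame-selection behaviour described above can be sketched as follows (an illustrative sketch with a hypothetical function name, not the actual FrameDataset implementation):

```python
def select_frames(num_frames, start_frame=0, end_frame=-1, nth_frame=1):
    """Return the indices of the frames kept from a source with
    num_frames frames: every n-th frame between start_frame and
    end_frame. Sketch only, not ComfyWarp's actual code."""
    if end_frame < 0 or end_frame > num_frames:
        end_frame = num_frames  # -1 means "use all available frames"
    return list(range(start_frame, end_frame, nth_frame))

# e.g. a 100-frame source, keeping every 5th frame of frames 10..29
print(select_frames(100, start_frame=10, end_frame=30, nth_frame=5))
# -> [10, 15, 20, 25]
```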
Loads a frame from the frame folder
Input 2 frames, get optical flow between them, and consistency masks
Applies optical flow to a frame
Loads rendered frames from the output folder for loopback, refreshing the frame list on each call. If the folder contains no frames, returns init_image.
Returns 2 frames: current and previous.
Resize an image to fit, keeping the aspect ratio.
Saves a frame to a folder, named by the current frame number.
Mixes consistency maps, with optional blur and dilation.
Trigger output video render at a given frame
FlowBlend Deflickering pipeline from warp's video output cell
This part smooths out the frames by blending the current stylized output frame and previously stylized and warped output frame, where consistent areas are blended linearly, and inconsistent areas are taken from the current stylized output frame only. This smooths non-moving parts and helps reduce trails on moving parts.
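The blend described above might look roughly like this (a minimal numpy sketch assuming float images in [0, 1] and a consistency mask where 1 = consistent; names are hypothetical, not the node's actual code):

```python
import numpy as np

def flow_blend(current, prev_warped, consistency, blend=0.5):
    """Blend the current stylized frame with the previously stylized,
    warped frame. Consistent areas (mask ~1) are blended linearly;
    inconsistent areas (mask ~0) are taken from the current frame only,
    which smooths static regions and reduces trails on moving ones."""
    weight = blend * consistency  # per-pixel blend weight
    return current * (1.0 - weight) + prev_warped * weight
```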
Renders a video from frames following the {frames_folder}{batch_name}%06d.png pattern.
output_dir: folder to put the rendered video into; created automatically
frames_input_dir: folder to read frames from; plug your SaveFrame output dir here
batch_name: the batch name you've set in your SaveFrame node; default: ComfyWarp
first_frame: start the video from this frame; default: 0
last_frame: end the video at this frame; default: -1 (use all available frames)
render_at_frame: frame at which to begin rendering the video. Plug your FrameDataset total frame number here, or set it manually to render the video before diffusing all the frames
current_frame: the frame currently being rendered; used to trigger the video render. Don't enter manually; plug your current frame variable here
fps: output fps
output_format: codec to use: h264_mp4, qtrle_mov, prores_mov
use_deflicker: enable ffmpeg's built-in deflicker filter
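Under the hood, a render like this typically boils down to an ffmpeg call over the frame pattern. The sketch below builds such a command for the h264_mp4 case; the helper name and exact flags are assumptions, not the node's actual invocation:

```python
import pathlib

def build_render_cmd(frames_input_dir, output_dir, batch_name="ComfyWarp",
                     fps=30, first_frame=0, use_deflicker=False):
    """Build an ffmpeg command roughly equivalent to RenderVideo's
    h264_mp4 output. Hypothetical sketch; the node's actual flags
    and filter chain may differ."""
    pattern = str(pathlib.Path(frames_input_dir) / f"{batch_name}%06d.png")
    out_file = str(pathlib.Path(output_dir) / f"{batch_name}.mp4")
    cmd = ["ffmpeg", "-y",
           "-framerate", str(fps),
           "-start_number", str(first_frame),
           "-i", pattern]
    if use_deflicker:
        cmd += ["-vf", "deflicker"]  # ffmpeg's built-in deflicker filter
    cmd += ["-c:v", "libx264", "-pix_fmt", "yuv420p", out_file]
    return cmd

# run with e.g. subprocess.run(build_render_cmd("frames", "out"), check=True)
```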
Added Scheduler nodes, one per output variable type: string, int, float.
They can be used with any ComfyUI node input, for example to automate parameter testing or to schedule controlnet weights, sampler settings, and prompts.
Schedulers require a current frame input to drive the scheduled sampling; any int is accepted as the current frame.
Input formats:
value
[frame1_value, frame2_value, ..., frameN_value]
{0: frame1_value, 1: frame2_value, ..., N: frameN_value}
The dictionary format supports interpolation of values between frames, just like in WarpFusion.
start - frame to begin with
end - frame to end with
current_frame - iterator showing the current frame, which is output as the current value; plug it into downstream nodes as the source of the current frame number
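A minimal sketch of how the dictionary format with interpolation could be resolved at a given frame (illustrative only, not the nodes' actual parser):

```python
def scheduled_value(schedule, current_frame):
    """Resolve a {frame: value} schedule at current_frame, linearly
    interpolating between keyframes. Before the first keyframe the
    first value is used; after the last, the last value. Sketch only."""
    keys = sorted(schedule)
    if current_frame <= keys[0]:
        return schedule[keys[0]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= current_frame <= hi:
            t = (current_frame - lo) / (hi - lo)
            return schedule[lo] + (schedule[hi] - schedule[lo]) * t
    return schedule[keys[-1]]

print(scheduled_value({0: 0.0, 10: 1.0}, 5))  # -> 0.5
```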
Queue Many button
Click to render a set of frames from start to end. Plug the end output into your RenderVideo node's render_at_frame input to automatically render the video after the end frame finishes.
Works like its WarpFusion counterpart: blends the previously stylized and warped frame (with the consistency mask applied) with the corresponding raw video frame. Acts like style opacity: 0 - raw frame only, 1 - stylized frame only, intermediate values - linear interpolation between the raw and stylized frames.
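The opacity behaviour reduces to a per-pixel linear interpolation, e.g. (a sketch with a hypothetical function name, assuming float images; not the node's actual code):

```python
def apply_style_opacity(raw, stylized_warped, opacity):
    """Linear interpolation between the raw video frame and the
    stylized, warped frame: 0 -> raw only, 1 -> stylized only.
    Works elementwise on numpy arrays as well as scalars."""
    return raw * (1.0 - opacity) + stylized_warped * opacity
```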