# ComfyUI-RadialAttn
This repo ports RadialAttention to ComfyUI native workflows. If you're using kijai's ComfyUI-WanVideoWrapper rather than native workflows, you can use its `WanVideoSetRadialAttention` node instead of this repo, but you still need to install the pip packages below.
This supports Wan 2.1 and 2.2 14B, both T2V and I2V.
## Installation
Here I list all the Windows wheels. On Linux I guess you know what to do.
- Install triton-windows
- Install SageAttention
- Install SpargeAttention
- Install flashinfer-windows
  - Currently FlashInfer only supports PyTorch 2.6, but it's mostly a placeholder for the purposes of this repo. If you're using another version of PyTorch, you can run:
    ```
    pip install --no-deps .\flashinfer_python-0.2.8-cp39-abi3-win_amd64.whl
    pip install cuda-python einops ninja numpy pynvml requests
    ```
- `git clone` this repo to your `ComfyUI/custom_nodes/`
## Usage
Just connect your model to the `PatchRadialAttn` node. There's an example workflow for Wan 2.2 14B I2V + GGUF + LightX2V LoRA + RadialAttn + `torch.compile`.
It's believed that skipping RadialAttn on the first layer (`dense_block = 1`) and the first time step (`dense_timestep = 1`) improves the quality.
RadialAttn requires specific video sizes and lengths: the number of video tokens must be divisible by 128. For Wan 14B, this number is computed as `width/16 * height/16 * (length+3)/4`. See `video_token_num` for details.

(A common misunderstanding is that the width and height themselves must be divisible by 128, but that's not the case.)
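As a quick way to check whether a given size and length is compatible, the formula above can be sketched as a standalone helper. The function name mirrors the repo's `video_token_num`, but this is an illustrative reimplementation of the formula, not the repo's actual code:

```python
def video_token_num(width: int, height: int, length: int) -> int:
    """Token count for Wan 14B: 16x16 spatial patches, 4-frame temporal chunks.

    A standalone sketch of the formula in the README, not the repo's
    actual implementation.
    """
    return (width // 16) * (height // 16) * ((length + 3) // 4)


def radial_attn_compatible(width: int, height: int, length: int) -> bool:
    # RadialAttn requires the video token count to be divisible by 128.
    return video_token_num(width, height, length) % 128 == 0


# 1024x512, 61 frames: 64 * 32 * 16 = 32768 tokens, divisible by 128
print(radial_attn_compatible(1024, 512, 61))  # True
# 832x480, 81 frames: 52 * 30 * 21 = 32760 tokens, not divisible
print(radial_attn_compatible(832, 480, 81))   # False
```

Note that neither 1024 nor 512 being divisible by 128 is what matters here; only the resulting token count is checked.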