A custom node for ComfyUI that adds CLIP skip functionality to the vanilla WAN workflow using CLIP. This node lets you skip a specified number of layers in a CLIP model, which can adjust the style or quality of the image embeddings used in generation pipelines.
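For context, "skipping" a layer means using the encoder's hidden state from an earlier transformer layer instead of the final one. A minimal sketch of that index mapping, assuming ComfyUI's usual convention of negative "stop at layer" indices (the helper name below is illustrative and not part of this repository):

```python
def skip_to_stop_layer(skip_layers: int) -> int:
    """Map a skip count to a ComfyUI-style negative layer index.

    skip_layers=0 -> -1 (use the final layer, i.e. no skipping)
    skip_layers=1 -> -2 (use the penultimate layer's hidden state)
    """
    return -(skip_layers + 1)
```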
To install, clone this repository into your `ComfyUI/custom_nodes` directory:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/yourusername/ComfyUI-CLIPSkip.git
```

Then restart ComfyUI. The node will be automatically loaded. Ensure you have the required WAN CLIP model (e.g., `umt5_xxl_fp8_e4m3fn_scaled.safetensors`) in `ComfyUI/models/text_encoders/`.

If needed, you can instead copy the `ComfyUI-CLIPSkip` folder into the `ComfyUI/custom_nodes` directory manually. No additional dependencies are required.
To use the node:

1. Add a `CLIPVisionLoader` (or any other node that outputs `CLIP_VISION`).
2. Connect its `clip_vision` output to the `clip` input of `CLIPSkip`.
3. Set the `skip_layers` parameter (e.g., 1 to skip the last layer, 0 to disable skipping).
4. Connect the `clip` output to any node that accepts `CLIP_VISION` (e.g., `CLIPVisionEncode`).

Example pipeline: `CLIPVisionLoader -> CLIPSkip -> CLIPVisionEncode -> (further pipeline)`
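For reference, a node like this can be implemented in only a few lines. The following is a hypothetical sketch modeled on ComfyUI's built-in `CLIPSetLastLayer` node, not this repository's actual source: it operates on a `CLIP` object via the `clone()`/`clip_layer()` interface, and the class name, parameter range, and category are illustrative assumptions.

```python
# Hypothetical sketch modeled on ComfyUI's built-in CLIPSetLastLayer node.
# Class name, parameter range, and category are illustrative, not this repo's code.

class CLIPSkip:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "clip": ("CLIP",),
                # 0 = no skipping, 1 = skip the last layer, and so on.
                "skip_layers": ("INT", {"default": 0, "min": 0, "max": 12, "step": 1}),
            }
        }

    RETURN_TYPES = ("CLIP",)
    FUNCTION = "skip"
    CATEGORY = "conditioning"

    def skip(self, clip, skip_layers):
        clip = clip.clone()  # work on a copy so the loaded model is not mutated
        if skip_layers > 0:
            # ComfyUI counts layers from the end: -1 is the final layer,
            # -2 the penultimate one, so skipping N layers stops at -(N + 1).
            clip.clip_layer(-(skip_layers + 1))
        return (clip,)


# Registration expected by ComfyUI when it imports the package's __init__.py.
NODE_CLASS_MAPPINGS = {"CLIPSkip": CLIPSkip}
NODE_DISPLAY_NAME_MAPPINGS = {"CLIPSkip": "CLIP Skip"}
```

Cloning before calling `clip_layer()` keeps the loaded model untouched for other branches of the workflow.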
MIT License (see the `LICENSE` file for details).
Feel free to submit issues or pull requests on GitHub!