ComfyUI Extension: ComfyUI-nunchaku
The ComfyUI plugin for Nunchaku
Custom Nodes (418)
- ACEPlusFFTConditioning~
- ACEPlusFFTProcessor~
- ACEPlusFFTLoader~
- AddNoise
- Adaptive Projected Guidance
- Audio Adjust Volume
- Audio Concat
- Audio Merge
- BasicGuider
- BasicScheduler
- BetaSamplingScheduler
- ByteDance First-Last-Frame to Video
- ByteDance Image Edit
- ByteDance Image
- ByteDance Reference Images to Video
- ByteDance Image to Video
- ByteDance Seedream 4
- ByteDance Text to Video
- Case Converter
- CFGGuider
- Load Checkpoint With Config (DEPRECATED)
- Load Checkpoint
- Save Checkpoint
- Load CLIP
- CLIPMergeAdd
- CLIPMergeSimple
- CLIPMergeSubtract
- CLIPSave
- CLIP Set Last Layer
- CLIP Text Encode (Prompt)
- CLIPTextEncodeFlux
- CLIPTextEncodeHunyuanDiT
- CLIP Text Encode for Lumina2
- CLIPTextEncodeSD3
- CLIP Vision Encode
- Load CLIP Vision
- Combine Hooks [2]
- Combine Hooks [4]
- Combine Hooks [8]
- ConditioningAverage
- Conditioning (Combine)
- Conditioning (Concat)
- Conditioning (Set Area)
- Conditioning (Set Area with Percentage)
- ConditioningSetAreaPercentageVideo
- ConditioningSetAreaStrength
- Cond Set Default Combine
- Conditioning (Set Mask)
- Cond Set Props
- Cond Set Props Combine
- ConditioningSetTimestepRange
- ConditioningStableAudio
- Timesteps Range
- ConditioningZeroOut
- Context Windows (Manual)
- Apply ControlNet (OLD)
- Apply ControlNet
- Apply Controlnet with VAE
- ControlNetInpaintingAliMamaApply
- Load ControlNet Model
- Create Hook Keyframe
- Create Hook Keyframes From Floats
- Create Hook Keyframes Interp.
- Create Hook LoRA
- Create Hook LoRA (MO)
- Create Hook Model as LoRA
- Create Hook Model as LoRA (MO)
- Create Video
- CropMask
- Load ControlNet Model (diff)
- Differential Diffusion
- DiffusersLoader
- DisableNoise
- DualCFGGuider
- DualCLIPLoader
- EasyCache
- Empty Audio
- EmptyHunyuanImageLatent
- EmptyHunyuanLatentVideo
- EmptyImage
- Empty Latent Audio
- EmptyLatentHunyuan3Dv2
- Empty Latent Image
- EmptySD3LatentImage
- ExponentialScheduler
- ExtendIntermediateSigmas
- FeatherMask
- FlipSigmas
- FluxDisableGuidance
- FluxGuidance
- FluxKontextImageScale
- Flux.1 Kontext [max] Image
- FluxKontextMultiReferenceLatentMethod
- Flux.1 Kontext [pro] Image
- Flux.1 Canny Control Image
- Flux.1 Depth Control Image
- Flux.1 Expand Image
- Flux.1 Fill Image
- Flux 1.1 [pro] Ultra Image
- FreeU
- FreeU_V2
- FreSca
- Google Gemini Image
- Gemini Input Files
- Google Gemini
- Get Image Size
- Get Video Components
- GLIGENLoader
- GLIGENTextBoxApply
- GrowMask
- Hunyuan3Dv2Conditioning
- Hunyuan3Dv2ConditioningMultiView
- HunyuanImageToVideo
- HunyuanRefinerLatent
- HypernetworkLoader
- Ideogram V1
- Ideogram V2
- Ideogram V3
- ImageAddNoise
- Batch Images
- ImageColorToMask
- ImageCompositeMasked
- Image Crop
- ImageFlip
- ImageFromBatch
- Invert Image
- Image Only Checkpoint Loader (img2vid model)
- ImageOnlyCheckpointSave
- Pad Image for Outpainting
- ImageRotate
- Upscale Image
- Upscale Image By
- ImageScaleToMaxDimension
- Image Stitch
- Convert Image to Mask
- Upscale Image (using Model)
- InpaintModelConditioning
- InvertMask
- Join Image with Alpha
- KarrasScheduler
- Kling Image to Video (Camera Control)
- Kling Camera Controls
- Kling Text to Video (Camera Control)
- Kling Dual Character Video Effects
- Kling Image to Video
- Kling Image Generation
- Kling Lip Sync Video with Audio
- Kling Lip Sync Video with Text
- Kling Video Effects
- Kling Start-End Frame to Video
- Kling Text to Video
- Kling Video Extend
- Kling Virtual Try On
- KSampler
- KSampler (Advanced)
- KSamplerSelect
- LaplaceScheduler
- LatentAdd
- LatentApplyOperation
- LatentApplyOperationCFG
- LatentBatch
- LatentBatchSeedBehavior
- Latent Blend
- Latent Composite
- LatentCompositeMasked
- LatentConcat
- Crop Latent
- LatentCut
- Flip Latent
- Latent From Batch
- LatentInterpolate
- LatentMultiply
- LatentOperationSharpen
- LatentOperationTonemapReinhard
- Rotate Latent
- LatentSubtract
- Upscale Latent
- Upscale Latent By
- LazyCache
- Load 3D
- Load 3D - Animation
- Load Audio
- Load Image
- Load Image (as Mask)
- Load Image (from Outputs)
- Load Image Dataset from Folder
- Load Image and Text Dataset from Folder
- LoadLatent
- Load Video
- Load LoRA
- LoraLoaderModelOnly
- Load LoRA Model
- Extract and Save Lora
- Plot Loss Graph
- Luma Concepts
- Luma Image to Image
- Luma Text to Image
- Luma Image to Video
- Luma Reference
- Luma Text to Video
- Mahiro is so cute that she deserves a better guidance function!! (。・ω・。)
- MaskComposite
- MaskPreview
- Convert Mask to Image
- MiniMax Hailuo Video
- MiniMax Image to Video
- MiniMax Text to Video
- ModelComputeDtype
- ModelMergeAdd
- ModelMergeAuraflow
- ModelMergeBlocks
- ModelMergeCosmos14B
- ModelMergeCosmos7B
- ModelMergeCosmosPredict2_14B
- ModelMergeCosmosPredict2_2B
- ModelMergeFlux1
- ModelMergeLTXV
- ModelMergeMochiPreview
- ModelMergeQwenImage
- ModelMergeSD1
- ModelMergeSD2
- ModelMergeSD3_2B
- ModelMergeSD35_Large
- ModelMergeSDXL
- ModelMergeSimple
- ModelMergeSubtract
- ModelMergeWAN2_1
- ModelPatchLoader
- ModelSamplingAuraFlow
- ModelSamplingContinuousEDM
- ModelSamplingContinuousV
- ModelSamplingDiscrete
- ModelSamplingFlux
- ModelSamplingSD3
- ModelSamplingStableCascade
- ModelSave
- Moonvalley Marey Image to Video
- Moonvalley Marey Text to Video
- Moonvalley Marey Video to Video
- ImageMorphology
- FLUX Depth Preprocessor (Deprecated)
- Nunchaku Installer
- OpenAI ChatGPT Advanced Options
- OpenAI ChatGPT
- OpenAI DALL·E 2
- OpenAI DALL·E 3
- OpenAI GPT Image 1
- OpenAI ChatGPT Input Files
- OpenAI Sora - Video
- Cond Pair Combine
- Cond Pair Set Default Combine
- Cond Pair Set Props
- Cond Pair Set Props Combine
- PatchModelAddDownscale (Kohya Deep Shrink)
- Perp-Neg (DEPRECATED by PerpNegGuider)
- Pikadditions (Video Object Insertion)
- Pikaffects (Video Effects)
- Pika Image to Video
- Pika Scenes (Video Image Composition)
- Pika Start and End Frame to Video
- Pika Swaps (Video Object Replacement)
- Pika Text to Video
- PixVerse Image to Video
- PixVerse Template
- PixVerse Text to Video
- PixVerse Transition Video
- PolyexponentialScheduler
- Porter-Duff Image Composite
- Preview 3D
- Preview 3D - Animation
- Preview Any
- Preview Audio
- Preview Image
- Boolean
- Float
- Int
- String
- String (Multiline)
- QwenImageDiffsynthControlnet
- RandomNoise
- Rebatch Images
- Rebatch Latents
- Record Audio
- Recraft Color RGB
- Recraft Controls
- Recraft Creative Upscale Image
- Recraft Crisp Upscale Image
- Recraft Image Inpainting
- Recraft Image to Image
- Recraft Remove Background
- Recraft Replace Background
- Recraft Style - Digital Illustration
- Recraft Style - Infinite Style Library
- Recraft Style - Logo Raster
- Recraft Style - Realistic Image
- Recraft Text to Image
- Recraft Text to Vector
- Recraft Vectorize Image
- Regex Extract
- Regex Match
- Regex Replace
- RepeatImageBatch
- Repeat Latent Batch
- RescaleCFG
- ResizeAndPadImage
- Rodin 3D Generate - Detail Generate
- Rodin 3D Generate - Gen-2 Generate
- Rodin 3D Generate - Regular Generate
- Rodin 3D Generate - Sketch Generate
- Rodin 3D Generate - Smooth Generate
- Runway First-Last-Frame to Video
- Runway Image to Video (Gen3a Turbo)
- Runway Image to Video (Gen4 Turbo)
- Runway Text to Image
- SamplerCustom
- SamplerCustomAdvanced
- SamplerDPMAdaptative
- SamplerDPMPP_2M_SDE
- SamplerDPMPP_2S_Ancestral
- SamplerDPMPP_3M_SDE
- SamplerDPMPP_SDE
- SamplerER_SDE
- SamplerEulerAncestral
- SamplerEulerAncestralCFG++
- SamplerEulerCFG++
- SamplerLMS
- SamplerSASolver
- SamplingPercentToSigma
- SaveAnimatedPNG
- SaveAnimatedWEBP
- Save Audio (FLAC)
- Save Audio (MP3)
- Save Audio (Opus)
- SaveGLB
- Save Image
- SaveLatent
- Save LoRA Weights
- SaveSVGNode
- Save Video
- SDTurboScheduler
- Self-Attention Guidance
- Set CLIP Hooks
- SetFirstSigma
- Set Hook Keyframes
- Set Latent Noise Mask
- SetUnionControlNetType
- SkipLayerGuidanceDiT
- SkipLayerGuidanceDiTSimple
- SkipLayerGuidanceSD3
- SolidMask
- Split Audio Channels
- Split Image with Alpha
- SplitSigmas
- SplitSigmasDenoise
- Stability AI Audio Inpaint
- Stability AI Audio To Audio
- Stability AI Stable Diffusion 3.5 Image
- Stability AI Stable Image Ultra
- Stability AI Text To Audio
- Stability AI Upscale Conservative
- Stability AI Upscale Creative
- Stability AI Upscale Fast
- Compare
- Concatenate
- Contains
- Length
- Replace
- Substring
- Trim
- Apply Style Model
- Load Style Model
- SVD_img2vid_Conditioning
- Tangential Damping CFG
- TextEncodeHunyuanVideo_ImageToVideo
- ThresholdMask
- Train LoRA
- Trim Audio Duration
- TripleCLIPLoader
- Tripo: Convert model
- Tripo: Image to Model
- Tripo: Multiview to Model
- Tripo: Refine Draft model
- Tripo: Retarget rigged model
- Tripo: Rig model
- Tripo: Text to Model
- Tripo: Texture model
- unCLIPCheckpointLoader
- unCLIPConditioning
- Load Diffusion Model
- Load Upscale Model
- USOStyleReference
- VAE Decode
- VAE Decode Audio
- VAEDecodeHunyuan3D
- VAE Decode (Tiled)
- VAE Encode
- VAE Encode Audio
- VAE Encode (for Inpainting)
- VAE Encode (Tiled)
- Load VAE
- VAESave
- Google Veo 3 Video Generation
- Google Veo 2 Video Generation
- VideoLinearCFGGuidance
- VideoTriangleCFGGuidance
- Vidu Image To Video Generation
- Vidu Reference To Video Generation
- Vidu Start End To Video Generation
- Vidu Text To Video Generation
- VoxelToMesh
- VoxelToMeshBasic
- VPScheduler
- WAN Context Windows (Manual)
- Wan Image to Image
- Wan Image to Video
- Wan Text to Image
- Wan Text to Video
- Webcam Capture
README
<div align="center" id="nunchaku_logo">
<img src="https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/logo/v2/nunchaku-compact-transparent.png" alt="logo" width="220"></img>
</div>
<h3 align="center">
<a href="http://arxiv.org/abs/2411.05007"><b>Paper</b></a> | <a href="https://nunchaku.tech/docs/ComfyUI-nunchaku/"><b>Docs</b></a> | <a href="https://hanlab.mit.edu/projects/svdquant"><b>Website</b></a> | <a href="https://hanlab.mit.edu/blog/svdquant"><b>Blog</b></a> | <a href="https://svdquant.mit.edu"><b>Demo</b></a> | <a href="https://huggingface.co/nunchaku-tech"><b>Hugging Face</b></a> | <a href="https://modelscope.cn/organization/nunchaku-tech"><b>ModelScope</b></a>
</h3>
<div align="center">
<a href=https://discord.gg/Wk6PnwX9Sm target="_blank"><img src=https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fdiscord.com%2Fapi%2Finvites%2FWk6PnwX9Sm%3Fwith_counts%3Dtrue&query=%24.approximate_member_count&logo=discord&logoColor=white&label=Discord&color=green&suffix=%20total height=22px></a>
<a href=https://huggingface.co/datasets/nunchaku-tech/cdn/resolve/main/nunchaku/assets/wechat.jpg target="_blank"><img src=https://img.shields.io/badge/WeChat-07C160?logo=wechat&logoColor=white height=22px></a>
<a href=https://deepwiki.com/nunchaku-tech/ComfyUI-nunchaku target="_blank"><img src=https://deepwiki.com/badge.svg height=22px></a>
</div>
This repository provides the ComfyUI plugin for Nunchaku, an efficient inference engine for 4-bit neural networks quantized with SVDQuant. For the quantization library, check out DeepCompressor.
Join our user groups on Discord and WeChat for discussion; details here. If you have any questions, run into issues, or are interested in contributing, feel free to share your thoughts with us!
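The core idea behind SVDQuant (described in the linked paper) is to absorb weight outliers into a small high-precision low-rank branch and quantize only the remaining residual to 4 bits. The sketch below is a minimal NumPy illustration of that decomposition, not Nunchaku's actual kernels; the function name and the uniform quantizer are illustrative choices:

```python
import numpy as np

def lowrank_plus_4bit(W, rank=32, bits=4):
    """Illustrative SVDQuant-style split: a high-precision low-rank
    branch plus a low-bit quantized residual."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the top-`rank` singular directions in full precision.
    L = (U[:, :rank] * S[:rank]) @ Vt[:rank, :]
    R = W - L
    # Uniform symmetric quantization of the residual to `bits` bits.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(R).max() / qmax
    Rq = np.round(R / scale).clip(-qmax - 1, qmax)
    # Reconstruction: exact low-rank part + dequantized residual.
    W_hat = L + Rq * scale
    return W_hat

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_hat = lowrank_plus_4bit(W)
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Because the low-rank branch soaks up the largest singular directions, the residual has a much tighter value range, so a 4-bit grid covers it with far less error than quantizing `W` directly.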
News
- [2025-09-24] 🔥 Released 4-bit, 4/8-step Qwen-Image-Edit-2509 Lightning models on Hugging Face! Try them out with this workflow!
- [2025-09-24] 🔥 Released 4-bit Qwen-Image-Edit-2509! Models are available on Hugging Face. Try them out with this workflow!
- [2025-09-09] 🔥 Released 4-bit Qwen-Image-Edit together with the 4/8-step Lightning variants! Models are available on Hugging Face. Try them out with this workflow!
- [2025-09-04] 🚀 Official release of Nunchaku v1.0.0! Qwen-Image now supports asynchronous offloading, cutting Transformer VRAM usage to as little as 3 GiB with no performance loss. You can also try our pre-quantized 4/8-step Qwen-Image-Lightning models on Hugging Face or ModelScope.
- [2025-08-23] 🚀 v1.0.0 adds support for Qwen-Image! Check this workflow to get started. LoRA support is coming soon.
- [2025-07-17] 📘 The official ComfyUI-nunchaku documentation is now live! Explore comprehensive guides and resources to help you get started.
- [2025-06-29] 🔥 v0.3.3 now supports FLUX.1-Kontext-dev! Download the quantized model from Hugging Face or ModelScope and use this workflow to get started.
- [2025-06-11] Starting from v0.3.2, you can now easily install or update the Nunchaku wheel using this workflow!
- [2025-06-07] 🚀 Released patch v0.3.1! FB Cache support is back, and 4-bit text encoder loading is fixed. PuLID nodes are now optional and won't interfere with other nodes. We've also added a NunchakuWheelInstaller node to help you install the correct Nunchaku wheel.
- [2025-06-01] 🚀 Released v0.3.0! This update adds support for multi-batch inference, ControlNet-Union-Pro 2.0, and initial PuLID integration. You can now load Nunchaku FLUX models from a single file, and our upgraded 4-bit T5 encoder now matches FP8 T5 in quality!
- [2025-04-16] 🎥 Released tutorial videos in both English and Chinese to assist with installation and usage.
- [2025-04-09] 📢 Published the April roadmap and an FAQ to help the community get started and stay up to date with Nunchaku’s development.
- [2025-04-05] 🚀 Released v0.2.0! This release introduces multi-LoRA and ControlNet support, with enhanced performance from FP16 attention and First-Block Cache. We've also added 20-series GPU compatibility and official workflows for FLUX.1-redux!
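The asynchronous offloading mentioned in the v1.0.0 entry amounts to double-buffering: while layer i computes, layer i+1's weights are fetched in the background, so transfer time hides behind compute time. A toy Python sketch of that pattern, purely illustrative; `load` and `compute` here are hypothetical stand-ins, not Nunchaku APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def run_layers(layers, load, compute):
    """Double-buffered execution: prefetch the next layer's weights
    on a background thread while the current layer computes."""
    if not layers:
        return []
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Start fetching the first layer's weights immediately.
        pending = pool.submit(load, layers[0])
        for i in range(len(layers)):
            weights = pending.result()  # wait for prefetch to finish
            if i + 1 < len(layers):
                # Kick off the next transfer before computing.
                pending = pool.submit(load, layers[i + 1])
            results.append(compute(weights))
    return results
```

With per-layer weight footprints streamed in this way rather than held resident, peak VRAM stays near the size of one or two layers, which is consistent with the "as little as 3 GiB" figure quoted above.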
Getting Started
- Installation Guide
- Usage Tutorial
- Example Workflows
- Node Reference
- API Reference
- Custom Model Quantization: DeepCompressor
- Contribution Guide
- Frequently Asked Questions