ComfyUI Extension: Efficiency Nodes for ComfyUI Version 2.0+
A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count. [w/NOTE: This node pack was originally created by LucianoCirino, but the original repository is no longer maintained and has been forked by a new maintainer. To use the forked version, uninstall the original version and REINSTALL this one.]
Custom Nodes (453)
- AddNoise
- Adaptive Projected Guidance
- Apply ControlNet Stack
- Audio Adjust Volume
- Audio Concat
- Audio Merge
- BasicGuider
- BasicScheduler
- BetaSamplingScheduler
- ByteDance First-Last-Frame to Video
- ByteDance Image Edit
- ByteDance Image
- ByteDance Reference Images to Video
- ByteDance Image to Video
- ByteDance Seedream 4
- ByteDance Text to Video
- Case Converter
- CFGGuider
- Load Checkpoint With Config (DEPRECATED)
- Load Checkpoint
- Save Checkpoint
- Load CLIP
- CLIPMergeAdd
- CLIPMergeSimple
- CLIPMergeSubtract
- CLIPSave
- CLIP Set Last Layer
- CLIP Text Encode (Prompt)
- CLIPTextEncodeFlux
- CLIPTextEncodeHunyuanDiT
- CLIP Text Encode for Lumina2
- CLIPTextEncodeSD3
- CLIP Vision Encode
- Load CLIP Vision
- Combine Hooks [2]
- Combine Hooks [4]
- Combine Hooks [8]
- ConditioningAverage
- Conditioning (Combine)
- Conditioning (Concat)
- Conditioning (Set Area)
- Conditioning (Set Area with Percentage)
- ConditioningSetAreaPercentageVideo
- ConditioningSetAreaStrength
- Cond Set Default Combine
- Conditioning (Set Mask)
- Cond Set Props
- Cond Set Props Combine
- ConditioningSetTimestepRange
- ConditioningStableAudio
- Timesteps Range
- ConditioningZeroOut
- Context Windows (Manual)
- Apply ControlNet (OLD)
- Apply ControlNet
- Apply Controlnet with VAE
- ControlNetInpaintingAliMamaApply
- Load ControlNet Model
- Control Net Stacker
- Create Hook Keyframe
- Create Hook Keyframes From Floats
- Create Hook Keyframes Interp.
- Create Hook LoRA
- Create Hook LoRA (MO)
- Create Hook Model as LoRA
- Create Hook Model as LoRA (MO)
- Create Video
- CropMask
- Load ControlNet Model (diff)
- Differential Diffusion
- DiffusersLoader
- DisableNoise
- DualCFGGuider
- DualCLIPLoader
- EasyCache
- Efficient Loader
- Eff. Loader SDXL
- Empty Audio
- EmptyHunyuanImageLatent
- EmptyHunyuanLatentVideo
- EmptyImage
- Empty Latent Audio
- EmptyLatentHunyuan3Dv2
- Empty Latent Image
- EmptySD3LatentImage
- Evaluate Floats
- Evaluate Integers
- Evaluate Strings
- ExponentialScheduler
- ExtendIntermediateSigmas
- FeatherMask
- FlipSigmas
- FluxDisableGuidance
- FluxGuidance
- FluxKontextImageScale
- Flux.1 Kontext [max] Image
- FluxKontextMultiReferenceLatentMethod
- Flux.1 Kontext [pro] Image
- Flux.1 Canny Control Image
- Flux.1 Depth Control Image
- Flux.1 Expand Image
- Flux.1 Fill Image
- Flux 1.1 [pro] Ultra Image
- FreeU
- FreeU_V2
- FreSca
- Google Gemini Image
- Gemini Input Files
- Google Gemini
- Get Image Size
- Get Video Components
- GLIGENLoader
- GLIGENTextBoxApply
- GrowMask
- HighRes-Fix Script
- Hunyuan3Dv2Conditioning
- Hunyuan3Dv2ConditioningMultiView
- HunyuanImageToVideo
- HunyuanRefinerLatent
- HypernetworkLoader
- Ideogram V1
- Ideogram V2
- Ideogram V3
- ImageAddNoise
- Batch Images
- ImageColorToMask
- ImageCompositeMasked
- Image Crop
- ImageFlip
- ImageFromBatch
- Invert Image
- Image Only Checkpoint Loader (img2vid model)
- ImageOnlyCheckpointSave
- Image Overlay
- Pad Image for Outpainting
- ImageRotate
- Upscale Image
- Upscale Image By
- ImageScaleToMaxDimension
- Image Stitch
- Convert Image to Mask
- Upscale Image (using Model)
- InpaintModelConditioning
- InvertMask
- Join Image with Alpha
- Join XY Inputs of Same Type
- KarrasScheduler
- Kling Image to Video (Camera Control)
- Kling Camera Controls
- Kling Text to Video (Camera Control)
- Kling Dual Character Video Effects
- Kling Image to Video
- Kling Image Generation
- Kling Lip Sync Video with Audio
- Kling Lip Sync Video with Text
- Kling Video Effects
- Kling Start-End Frame to Video
- Kling Text to Video
- Kling Video Extend
- Kling Virtual Try On
- KSampler
- KSampler (Advanced)
- KSampler Adv. (Efficient)
- KSampler (Efficient)
- KSampler SDXL (Eff.)
- KSamplerSelect
- LaplaceScheduler
- LatentAdd
- LatentApplyOperation
- LatentApplyOperationCFG
- LatentBatch
- LatentBatchSeedBehavior
- Latent Blend
- Latent Composite
- LatentCompositeMasked
- LatentConcat
- Crop Latent
- LatentCut
- Flip Latent
- Latent From Batch
- LatentInterpolate
- LatentMultiply
- LatentOperationSharpen
- LatentOperationTonemapReinhard
- Rotate Latent
- LatentSubtract
- Upscale Latent
- Upscale Latent By
- LazyCache
- Load 3D
- Load 3D - Animation
- Load Audio
- Load Image
- Load Image (as Mask)
- Load Image (from Outputs)
- Load Image Dataset from Folder
- Load Image and Text Dataset from Folder
- LoadLatent
- Load Video
- Load LoRA
- LoraLoaderModelOnly
- Load LoRA Model
- Extract and Save Lora
- LoRA Stacker
- LoRA Stack to String converter
- Plot Loss Graph
- Luma Concepts
- Luma Image to Image
- Luma Text to Image
- Luma Image to Video
- Luma Reference
- Luma Text to Video
- Mahiro is so cute that she deserves a better guidance function!! (。・ω・。)
- Manual XY Entry Info
- MaskComposite
- MaskPreview
- Convert Mask to Image
- MiniMax Hailuo Video
- MiniMax Image to Video
- MiniMax Text to Video
- ModelComputeDtype
- ModelMergeAdd
- ModelMergeAuraflow
- ModelMergeBlocks
- ModelMergeCosmos14B
- ModelMergeCosmos7B
- ModelMergeCosmosPredict2_14B
- ModelMergeCosmosPredict2_2B
- ModelMergeFlux1
- ModelMergeLTXV
- ModelMergeMochiPreview
- ModelMergeQwenImage
- ModelMergeSD1
- ModelMergeSD2
- ModelMergeSD3_2B
- ModelMergeSD35_Large
- ModelMergeSDXL
- ModelMergeSimple
- ModelMergeSubtract
- ModelMergeWAN2_1
- ModelPatchLoader
- ModelSamplingAuraFlow
- ModelSamplingContinuousEDM
- ModelSamplingContinuousV
- ModelSamplingDiscrete
- ModelSamplingFlux
- ModelSamplingSD3
- ModelSamplingStableCascade
- ModelSave
- Moonvalley Marey Image to Video
- Moonvalley Marey Text to Video
- Moonvalley Marey Video to Video
- ImageMorphology
- Noise Control Script
- OpenAI ChatGPT Advanced Options
- OpenAI ChatGPT
- OpenAI DALL·E 2
- OpenAI DALL·E 3
- OpenAI GPT Image 1
- OpenAI ChatGPT Input Files
- OpenAI Sora - Video
- Pack SDXL Tuple
- Cond Pair Combine
- Cond Pair Set Default Combine
- Cond Pair Set Props
- Cond Pair Set Props Combine
- PatchModelAddDownscale (Kohya Deep Shrink)
- Perp-Neg (DEPRECATED by PerpNegGuider)
- Pikadditions (Video Object Insertion)
- Pikaffects (Video Effects)
- Pika Image to Video
- Pika Scenes (Video Image Composition)
- Pika Start and End Frame to Video
- Pika Swaps (Video Object Replacement)
- Pika Text to Video
- PixVerse Image to Video
- PixVerse Template
- PixVerse Text to Video
- PixVerse Transition Video
- PolyexponentialScheduler
- Porter-Duff Image Composite
- Preview 3D
- Preview 3D - Animation
- Preview Any
- Preview Audio
- Preview Image
- Boolean
- Float
- Int
- String
- String (Multiline)
- QwenImageDiffsynthControlnet
- RandomNoise
- Rebatch Images
- Rebatch Latents
- Record Audio
- Recraft Color RGB
- Recraft Controls
- Recraft Creative Upscale Image
- Recraft Crisp Upscale Image
- Recraft Image Inpainting
- Recraft Image to Image
- Recraft Remove Background
- Recraft Replace Background
- Recraft Style - Digital Illustration
- Recraft Style - Infinite Style Library
- Recraft Style - Logo Raster
- Recraft Style - Realistic Image
- Recraft Text to Image
- Recraft Text to Vector
- Recraft Vectorize Image
- Regex Extract
- Regex Match
- Regex Replace
- RepeatImageBatch
- Repeat Latent Batch
- RescaleCFG
- ResizeAndPadImage
- Rodin 3D Generate - Detail Generate
- Rodin 3D Generate - Gen-2 Generate
- Rodin 3D Generate - Regular Generate
- Rodin 3D Generate - Sketch Generate
- Rodin 3D Generate - Smooth Generate
- Runway First-Last-Frame to Video
- Runway Image to Video (Gen3a Turbo)
- Runway Image to Video (Gen4 Turbo)
- Runway Text to Image
- SamplerCustom
- SamplerCustomAdvanced
- SamplerDPMAdaptative
- SamplerDPMPP_2M_SDE
- SamplerDPMPP_2S_Ancestral
- SamplerDPMPP_3M_SDE
- SamplerDPMPP_SDE
- SamplerER_SDE
- SamplerEulerAncestral
- SamplerEulerAncestralCFG++
- SamplerEulerCFG++
- SamplerLMS
- SamplerSASolver
- SamplingPercentToSigma
- SaveAnimatedPNG
- SaveAnimatedWEBP
- Save Audio (FLAC)
- Save Audio (MP3)
- Save Audio (Opus)
- SaveGLB
- Save Image
- SaveLatent
- Save LoRA Weights
- SaveSVGNode
- Save Video
- SDTurboScheduler
- Self-Attention Guidance
- Set CLIP Hooks
- SetFirstSigma
- Set Hook Keyframes
- Set Latent Noise Mask
- SetUnionControlNetType
- Simple Eval Examples
- SkipLayerGuidanceDiT
- SkipLayerGuidanceDiTSimple
- SkipLayerGuidanceSD3
- SolidMask
- Split Audio Channels
- Split Image with Alpha
- SplitSigmas
- SplitSigmasDenoise
- Stability AI Audio Inpaint
- Stability AI Audio To Audio
- Stability AI Stable Diffusion 3.5 Image
- Stability AI Stable Image Ultra
- Stability AI Text To Audio
- Stability AI Upscale Conservative
- Stability AI Upscale Creative
- Stability AI Upscale Fast
- Compare
- Concatenate
- Contains
- Length
- Replace
- Substring
- Trim
- Apply Style Model
- Load Style Model
- SVD_img2vid_Conditioning
- Tangential Damping CFG
- TextEncodeHunyuanVideo_ImageToVideo
- ThresholdMask
- Tiled Upscaler Script
- Train LoRA
- Trim Audio Duration
- TripleCLIPLoader
- Tripo: Convert model
- Tripo: Image to Model
- Tripo: Multiview to Model
- Tripo: Refine Draft model
- Tripo: Retarget rigged model
- Tripo: Rig model
- Tripo: Text to Model
- Tripo: Texture model
- unCLIPCheckpointLoader
- unCLIPConditioning
- Load Diffusion Model
- Unpack SDXL Tuple
- Load Upscale Model
- USOStyleReference
- VAE Decode
- VAE Decode Audio
- VAEDecodeHunyuan3D
- VAE Decode (Tiled)
- VAE Encode
- VAE Encode Audio
- VAE Encode (for Inpainting)
- VAE Encode (Tiled)
- Load VAE
- VAESave
- Google Veo 3 Video Generation
- Google Veo 2 Video Generation
- VideoLinearCFGGuidance
- VideoTriangleCFGGuidance
- Vidu Image To Video Generation
- Vidu Reference To Video Generation
- Vidu Start End To Video Generation
- Vidu Text To Video Generation
- VoxelToMesh
- VoxelToMeshBasic
- VPScheduler
- WAN Context Windows (Manual)
- Wan Image to Image
- Wan Image to Video
- Wan Text to Image
- Wan Text to Video
- Webcam Capture
- XY Input: Add/Return Noise
- XY Input: Aesthetic Score
- XY Input: CFG Scale
- XY Input: Checkpoint
- XY Input: Clip Skip
- XY Input: Control Net
- XY Input: Control Net Plot
- XY Input: Denoise
- XY Input: LoRA
- XY Input: LoRA Plot
- XY Input: LoRA Stacks
- XY Input: Manual XY Entry
- XY Input: Prompt S/R
- XY Input: Refiner On/Off
- XY Input: Sampler/Scheduler
- XY Input: Seeds++ Batch
- XY Input: Steps
- XY Input: VAE
- XY Plot
README
✨🍬 We plan to keep this branch alive and will try to solve or fix any issues, though progress may be slow since I maintain many GitHub repos. Before raising an issue, please update ComfyUI to the latest version and ensure all required packages are updated as well. Share your workflow in the issue so we can retest it on our end and update the patch. 🍬
<b>Efficiency Nodes for ComfyUI Version 2.0+</b>
A collection of <a href="https://github.com/comfyanonymous/ComfyUI" >ComfyUI</a> custom nodes to help streamline workflows and reduce total node count.
Releases
Please check out our wiki for use cases and new developments, including workflows and settings.<br> Efficiency Nodes Wiki<br>
Nodes:
<!--------------------------------------------------------------------------------------------------------------------------------------------------------->
<details> <summary><b>Efficient Loader</b> & <b>Eff. Loader SDXL</b></summary> <ul> <li>Nodes that can load & cache Checkpoint, VAE, & LoRA type models. <i>(cache settings are found in the config file 'node_settings.json')</i></li> <li>Able to apply LoRA & Control Net stacks via their <code>lora_stack</code> and <code>cnet_stack</code> inputs.</li> <li>Come with positive and negative prompt text boxes. You can also set the way you want the prompt to be <a href="https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb">encoded</a> via the <code>token_normalization</code> and <code>weight_interpretation</code> widgets.</li> <li>These nodes also feature a variety of custom menu options as shown below. <p><img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes//NodeMenu%20-%20Efficient%20Loaders.png" width="240" style="display: inline-block;"></p> <p><i>note: "🔍 View model info..." requires <a href="https://github.com/pythongosssss/ComfyUI-Custom-Scripts">ComfyUI-Custom-Scripts</a> to be installed to function.</i></p></li> <li>These loaders are used by the <b>XY Plot</b> node for many of its plot type dependencies.</li> </ul> <p align="center"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Efficient%20Loader.png" width="240" style="display: inline-block;"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Eff.%20Loader%20SDXL.png" width="240" style="display: inline-block;"> </p> </details>
<!--------------------------------------------------------------------------------------------------------------------------------------------------------->
<details> <summary><b>KSampler (Efficient)</b>, <b>KSampler Adv. (Efficient)</b>, <b>KSampler SDXL (Eff.)</b></summary>- Modded KSamplers with the ability to live preview generations and/or VAE decode images.
- Feature a special seed box that allows for clearer management of seeds. <i>(a seed of -1 applies the selected seed behavior)</i>
- Can execute a variety of scripts, such as the <b>XY Plot</b> script. To activate a script, simply connect it to the <code>script</code> input.

Script nodes are a group of nodes used in conjunction with the Efficient KSamplers to execute a variety of 'pre-wired' sets of actions.
- Script nodes can be chained if their inputs/outputs allow it. Multiple instances of the same script node in a chain have no effect.
<p align="center"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/ScriptChain.png" width="1080"> </p> <!--------------------------------------------------------------------------------------------------------------------------------------------------------> <details> <summary><b>XY Plot</b></summary> <ul> <li>Node that allows users to specify parameters for the Efficiency KSamplers to plot on a grid.</li> </ul> <p align="center"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/XY%20Plot%20-%20Node%20Example.png" width="1080"> </p> </details> <!--------------------------------------------------------------------------------------------------------------------------------------------------------> <details> <summary><b>HighRes-Fix</b></summary> <ul> <li>Node that the gives user the ability to upscale KSampler results through variety of different methods.</li> <li>Comes out of the box with popular Neural Network Latent Upscalers such as Ttl's <a href="https://github.com/Ttl/ComfyUi_NNLatentUpscale">ComfyUi_NNLatentUpscale</a> and City96's <a href="https://github.com/city96/SD-Latent-Upscaler">SD-Latent-Upscaler</a>.</li> <li>Supports ControlNet guided latent upscaling. <i> (You must have Fannovel's <a href="https://github.com/Fannovel16/comfyui_controlnet_aux">comfyui_controlnet_aux</a> installed to unlock this feature)</i></li> <li> Local models---The node pulls the required files from huggingface hub by default. You can create a models folder and place the modules there if you have a flaky connection or prefer to use it completely offline, it will load them locally instead. The path should be: ComfyUI/custom_nodes/efficiency-nodes-comfyui/models; Alternatively, just clone the entire HF repo to it: (git clone https://huggingface.co/city96/SD-Latent-Upscaler) to ComfyUI/custom_nodes/efficiency-nodes-comfyui/models</li> </ul> <p align="center"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/HighResFix%20-%20Node%20Example.gif" width="1080"> </p> </details> <!--------------------------------------------------------------------------------------------------------------------------------------------------------> <details> <summary><b>Noise Control</b></summary> <ul> <li>This node gives the user the ability to manipulate noise sources in a variety of ways, such as the sampling's RNG source.</li> <li>The <a href="https://github.com/shiimizu/ComfyUI_smZNodes">CFG Denoiser</a> noise hijack was developed by smZ, it allows you to get closer recreating Automatic1111 results.</li> <p></p><i>Note: The CFG Denoiser does not work with a variety of conditioning types such as ControlNet & GLIGEN</i></p> <li>This node also allows you to add noise <a href="https://github.com/chrisgoringe/cg-noise">Seed Variations</a> to your generations.</li> <li>For trying to replicate Automatic1111 images, this node will help you achieve it. 
Encode your prompt using "length+mean" <code>token_normalization</code> with "A1111" <code>weight_interpretation</code>, set the Noise Control Script node's <code>rng_source</code> to "gpu", and turn the <code>cfg_denoiser</code> to true.</li> </ul> <p align="center"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Noise%20Control%20Script.png" width="320"> </p> </details> <!--------------------------------------------------------------------------------------------------------------------------------------------------------> <details> <summary><b>Tiled Upscaler</b></summary> <ul> <li>The Tiled Upscaler script attempts to encompas BlenderNeko's <a href="https://github.com/BlenderNeko/ComfyUI_TiledKSampler">ComfyUI_TiledKSampler</a> workflow into 1 node.</li> <li>Script supports Tiled ControlNet help via the options.</li> <li>Strongly recommend the <code>preview_method</code> be "vae_decoded_only" when running the script.</li> </ul> <p align="center"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/Tiled%20Upscaler%20-%20Node%20Example.gif" width="1080"> </p> </details> <!--------------------------------------------------------------------------------------------------------------------------------------------------------> <details> <summary><b>AnimateDiff</b></summary> <ul> <li>To unlock the AnimateDiff script it is required you have installed Kosinkadink's <a href="https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved">ComfyUI-AnimateDiff-Evolved</a>.</li> <li>The latent <code>batch_size</code> when running this script becomes your frame count.</li> </ul> <p align="center"> <img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/AnimateDiff%20-%20Node%20Example.gif" width="1080"> </p> </details>
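If you want to pre-fetch the HighRes-Fix upscaler files for offline use, the sketch below is one way to do it. This is only an illustrative assumption (it uses the <code>huggingface_hub</code> Python package, which is not a dependency of this node pack); all the node actually needs is for the files to end up in the models folder described above.
<pre>
# Minimal sketch: pre-download City96's SD-Latent-Upscaler files for offline use.
# Assumes huggingface_hub is installed (pip install huggingface_hub) and that
# this is run from the ComfyUI root directory; adjust the path if yours differs.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="city96/SD-Latent-Upscaler",
    local_dir="custom_nodes/efficiency-nodes-comfyui/models/SD-Latent-Upscaler",
)
</pre>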
Workflow Examples:
Kindly load the PNG files of the same name from the workflow directory into ComfyUI to get these workflows. The PNG files have the workflow JSON embedded in them and are easy to drag and drop!<br>
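If you are curious what that embedded data looks like, the sketch below reads it back out of a workflow PNG. It is a minimal sketch and assumes Pillow is installed and that the PNG was saved by ComfyUI, which stores the graph in the PNG text chunks (typically under the "workflow" and "prompt" keys).
<pre>
# Minimal sketch: read the workflow JSON that ComfyUI embeds in its output PNGs.
# Assumes Pillow is installed; the "workflow"/"prompt" keys are ComfyUI's usual
# text-chunk names, and the file path below is one of the workflows in this repo.
import json
from PIL import Image

def read_embedded_workflow(png_path):
    meta = Image.open(png_path).info  # PNG text chunks end up in .info
    raw = meta.get("workflow") or meta.get("prompt")
    return json.loads(raw) if raw else None

workflow = read_embedded_workflow("workflows/HiResfix_workflow.png")
if workflow:
    print("nodes in workflow:", len(workflow.get("nodes", [])))
</pre>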
- HiRes-Fixing<br> <img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/HiResfix_workflow.png" width="800"><br>
- SDXL Refining & Noise Control Script<br> <img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/SDXL_base_refine_noise_workflow.png" width="800"><br>
- XY Plot: LoRA <code>model_strength</code> vs <code>clip_strength</code><br> <img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/Eff_XYPlot%20-%20LoRA%20Model%20vs%20Clip%20Strengths01.png" width="800"><br>
- Stacking Scripts: XY Plot + Noise Control + HiRes-Fix<br> <img src="https://github.com/LucianoCirino/efficiency-nodes-comfyui/blob/v2.0/workflows/XYPlot%20-%20Seeds%20vs%20Checkpoints%20%26%20Stacked%20Scripts.png" width="800"><br>
- Stacking Scripts: HiRes-Fix (with ControlNet)<br> <img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/eff_animatescriptWF001.gif" width="800"><br>
- SVD workflow: Stable Video Diffusion + Kohya Hires* (with latent control)<br>
Dependencies
The Python library <i><a href="https://github.com/danthedeckie/simpleeval" >simpleeval</a></i> must be installed if you wish to use the Simpleeval nodes. It can be installed with a simple pip command:
<pre>pip install simpleeval</pre>
simpleeval is a single-file library for easily adding evaluatable expressions to Python projects. Say you want to allow a user to set an alarm volume, which could depend on the time of day, the alarm level, how many previous alarms have gone off, and whether music is playing at the time.
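As a quick illustration of the kind of expression evaluation these nodes rely on, here is a minimal sketch using simpleeval's <code>simple_eval</code> function. The <code>a</code>/<code>b</code>/<code>c</code> variable names are chosen to mirror the inputs of the Evaluate Integers/Floats/Strings nodes and are an assumption for illustration only.
<pre>
# Minimal sketch: evaluate a user-supplied expression safely with simpleeval.
# The a/b/c names are illustrative; they are not taken from this node pack's code.
from simpleeval import simple_eval

result = simple_eval("a * b + c", names={"a": 4, "b": 8, "c": 2})
print(result)  # 34
</pre>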
Check the Notes for more information.
Install:
To install, drop the "efficiency-nodes-comfyui" folder into the "...\ComfyUI\ComfyUI\custom_nodes" directory and restart the UI.
Todo
[ ] Add guidance to notebook
Comfy Resources
Efficiency Linked Repos
- BlenderNeko ComfyUI_ADV_CLIP_emb by @BlenderNeko
- Chrisgoringe cg-noise by @Chrisgoringe
- pythongosssss ComfyUI-Custom-Scripts by @pythongosssss
- shiimizu ComfyUI_smZNodes by @shiimizu
- LEv145 images-grid-comfyUI-plugin by @LEv145
- ltdrdata ComfyUI-Inspire-Pack by @ltdrdata
- RockOfFire ComfyUI_Comfyroll_CustomNodes by @RockOfFire
Guides:
- ComfyUI Community Manual (eng) by @BlenderNeko

Extensions and Custom Nodes:
- Plugins for Comfy List (eng) by @WASasquatch
- Tomoaki's personal Wiki (jap) by @tjhayasaka
Support
If you create a cool image with our nodes, please show your result and message us on Twitter at @jags111 or @NeuralismAI.
You can join the <a href="https://discord.gg/vNVqT82W" alt="Neuralism Discord">NEURALISM AI DISCORD</a> or the <a href="https://discord.gg/UmSd4qyh" alt="Jags AI Discord">JAGS AI DISCORD</a> to share work created with these nodes, exchange experiences and parameters, and see more interesting custom workflows.
Support us on Patreon for future models and new versions of the AI notebooks.
- Tip me on <a href="https://www.patreon.com/jags111">Patreon</a>
My buymeacoffee.com page is linked below; if you are happy with my work, just buy me a coffee!
<a href="https://www.buymeacoffee.com/jagsAI"> coffee for JAGS AI</a>
Thank you for being awesome!
<img src = "images/ComfyUI_temp_vpose_00005_.png" width = "50%"> <!-- end support-pitch -->