ComfyUI-LTXVideo


ComfyUI-LTXVideo is a collection of custom nodes for ComfyUI, designed to provide useful tools for working with the LTXV model. The model itself is supported in the core ComfyUI code. The main LTXVideo repository can be found here.

🚀 New to using LTXV with ComfyUI? See our Getting Started page

โญ 16.07.2025 - LTXV 0.9.8 Release โญ

🚀 What's New

  1. LTXV 0.9.8 Model<br> The new model and its distilled variants offer improved prompt understanding and detail generation.<br> 👉 13B Distilled model<br> 👉 13B Distilled model 8-bit<br> 👉 2B from 13B Distilled model<br> 👉 2B from 13B Distilled model 8-bit<br> 👉 IC Lora Detailer

  2. Autoregressive Generation<br> Introducing new ComfyUI nodes that enable virtually infinite video generation. The new LTXV Looping Sampler node allows generation of videos with arbitrary length and consistent motion. ICLoRAs are supported as well: by providing guidance from existing videos (e.g., depth, pose, or Canny edges), you can generate long videos in a video-to-video manner (a conceptual sketch follows after this list).<br> 👉 Long Img2Video Generation Flow<br> 👉 Long Video2Video Generation Flow

  3. Detailer ICLoRA<br> Introducing the Detailer ICLoRA, which enhances generated latents with fine details by applying a few additional diffusion steps. This results in significantly more detailed generations (see the sketch below).<br> 👉 Detailer ICLoRA Flow
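To give a rough intuition for the autoregressive generation in item 2: each new chunk of latents overlaps the tail of the previous chunk, so motion stays consistent while the video grows without bound. The sketch below is only a conceptual illustration in plain PyTorch; `denoise_chunk` is a hypothetical stand-in for the actual LTXV sampling call, not this repo's API.

```python
import torch

def denoise_chunk(latents: torch.Tensor, conditioning) -> torch.Tensor:
    """Hypothetical stand-in for an LTXV sampling call that denoises one chunk."""
    # A real workflow would run the diffusion sampler here; this placeholder
    # just returns its input so the chaining logic below is runnable.
    return latents

def generate_long_video(num_chunks: int, chunk_frames: int = 32, overlap: int = 8,
                        channels: int = 16, height: int = 64, width: int = 64) -> torch.Tensor:
    """Chain chunks so each new chunk is conditioned on the tail of the previous one."""
    chunks = []
    prev_tail = None
    for _ in range(num_chunks):
        noise = torch.randn(1, channels, chunk_frames, height, width)
        if prev_tail is not None:
            # Re-use the last `overlap` latent frames as the start of the next chunk.
            noise[:, :, :overlap] = prev_tail
        denoised = denoise_chunk(noise, conditioning=prev_tail)
        prev_tail = denoised[:, :, -overlap:].clone()
        # Drop the overlapping frames (except for the first chunk) before concatenating.
        chunks.append(denoised if not chunks else denoised[:, :, overlap:])
    return torch.cat(chunks, dim=2)
```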
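For the Detailer ICLoRA in item 3, the underlying idea is a short, low-strength refinement pass: the finished latents are lightly re-noised and then denoised for a handful of extra steps with the detailer LoRA active. A minimal sketch under that assumption (the `denoise_step` callable is a placeholder, not the node's interface):

```python
import torch

def refine_latents(latents: torch.Tensor, denoise_step, num_steps: int = 4,
                   noise_strength: float = 0.2) -> torch.Tensor:
    """Lightly re-noise finished latents, then run a few extra denoising steps."""
    noisy = latents + noise_strength * torch.randn_like(latents)
    for step in range(num_steps):
        # `denoise_step` stands in for one diffusion step with the detailer LoRA applied.
        noisy = denoise_step(noisy, step)
    return noisy

# Example with a no-op "denoiser" just to show the call shape.
latents = torch.randn(1, 16, 24, 64, 64)
refined = refine_latents(latents, denoise_step=lambda x, s: x)
```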

โญ 8.07.2025 - LTXVideo ICLora Release โญ

🚀 What's New in LTXVideo ICLoRA

  1. Three New ICLoRA Models<br> Introducing three powerful in-context LoRA models (depth, pose, and Canny edge control) that enable precise control over video generation.

  2. New Node: 🅛🅣🅧 LTXV In Context Sampler<br> A dedicated node for seamlessly integrating ICLoRA models into your workflow, enabling fine-grained control over video generation using depth maps, pose estimation, or edge detection.

  3. Example Workflow<br> Check out the example workflow for a complete demonstration of how to use the ICLoRA models effectively.

  4. Custom ICLoRA Training<br> We've released a trainer that allows you to create your own specialized ICLoRA models for custom control signals. Check out the trainer repository to get started.

โญ 9.06.2025 โ€“ LTXVideo VAE Patcher, Mask manipulation and Q8 LoRA loader nodes. โญ

  1. LTXV Patcher VAE<br> The new node improves VAE decoding performance by reducing runtime and cutting memory consumption by up to 50%. This allows generation of higher-resolution outputs on consumer-grade GPUs with limited VRAM, without needing to load the VAE partially or decode in tiles.<br> ⚠️ On Windows, you may need to add the paths to the MSVC compiler (cl.exe) and ninja.exe to your system environment PATH variable.
  2. LTXV Preprocess Masks<br> Preprocesses masks for use with the LTXVideo model's latent masking. It validates mask dimensions based on VAE downscaling, supports optional inversion, handles the first-frame mask separately, combines temporal masks via max pooling, applies morphological operations to grow or shrink masks, and clamps values to ensure correct opacity. The result is a set of masks ready for latent-space masking (a rough sketch of these steps follows after this list).
  3. LTXV Q8 Lora Model Loader<br> Applying LoRA to an FP8-quantized model requires special handling to preserve output quality. It's crucial to apply LoRA weights at the correct precision, as the current LoRA implementation in ComfyUI does so in a non-optimal way. This node addresses that limitation by ensuring LoRA weights are applied properly, resulting in significantly better quality. If you're working with an FP8 LTXV model, using this node guarantees that LoRA behaves as expected and delivers the intended effect (a simplified illustration follows below).
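To make the mask pipeline from item 2 concrete, here is a rough PyTorch sketch of the described steps (optional inversion, separate first-frame handling, temporal max pooling, spatial downscaling, morphological grow/shrink, clamping). The function name and downscale factors below are assumptions for illustration, not the node's actual implementation.

```python
import torch
import torch.nn.functional as F

def preprocess_masks(masks: torch.Tensor, invert: bool = False, grow: int = 0,
                     spatial_scale: int = 32, temporal_scale: int = 8) -> torch.Tensor:
    """Rough sketch of latent-space mask preparation.

    masks: (frames, height, width) float tensor in [0, 1].
    spatial_scale / temporal_scale: assumed VAE downscaling factors.
    """
    if invert:
        masks = 1.0 - masks
    # Keep the first-frame mask separate, then max-pool the rest over time so a
    # latent frame is masked if any of the pixel frames it covers is masked.
    first, rest = masks[:1], masks[1:]
    t = rest.shape[0] // temporal_scale * temporal_scale
    pooled = rest[:t].reshape(-1, temporal_scale, *rest.shape[1:]).amax(dim=1)
    masks = torch.cat([first, pooled], dim=0)
    # Downscale spatially to the latent resolution.
    masks = F.max_pool2d(masks.unsqueeze(1), kernel_size=spatial_scale).squeeze(1)
    # Morphological grow (dilation via max-pooling) or shrink (erosion via min-pooling).
    if grow > 0:
        masks = F.max_pool2d(masks.unsqueeze(1), 2 * grow + 1, stride=1, padding=grow).squeeze(1)
    elif grow < 0:
        masks = -F.max_pool2d(-masks.unsqueeze(1), -2 * grow + 1, stride=1, padding=-grow).squeeze(1)
    # Clamp to keep valid opacity values.
    return masks.clamp(0.0, 1.0)
```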
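For item 3, the key point is that a LoRA update should be merged in higher precision and only then cast back to FP8, rather than accumulated directly in the 8-bit format. A simplified illustration of that idea (not the node's actual code; the float8 dtype requires a recent PyTorch build):

```python
import torch

def merge_lora_into_fp8(weight_fp8: torch.Tensor, lora_down: torch.Tensor,
                        lora_up: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Merge a LoRA update into an FP8 weight without losing precision.

    The update is computed and added in float32, then cast back to FP8 once,
    instead of being added directly in the 8-bit format.
    """
    weight = weight_fp8.to(torch.float32)
    delta = alpha * (lora_up.to(torch.float32) @ lora_down.to(torch.float32))
    return (weight + delta).to(weight_fp8.dtype)

# Shapes: weight (out, in), lora_down (rank, in), lora_up (out, rank).
w = torch.randn(128, 64).to(torch.float8_e4m3fn)
down, up = torch.randn(8, 64), torch.randn(128, 8)
w_merged = merge_lora_into_fp8(w, down, up, alpha=0.5)
```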

โญ 14.05.2025 โ€“ LTXVideo 13B 0.9.7 Distilled Release โญ

🚀 What's New in LTXVideo 13B 0.9.7 Distilled

  1. LTXV 13B Distilled 🥳 0.9.7<br> Delivers cinematic-quality videos at a fraction of the steps needed to run the full model. Only 4 or 8 steps are needed for a single generation.<br> 👉 Download here

  2. LTXV 13B Distilled Quantized 0.9.7<br> Offers reduced memory requirements and even faster inference speeds. Ideal for consumer-grade GPUs (e.g., NVIDIA 4090, 5090).<br> Important: to get the best performance with the quantized version, please install the q8_kernels package and use the dedicated flow below.<br> 👉 Download here<br> 🧩 Example ComfyUI flow available in the Example Workflows section.

  3. Updated LTXV 13B Quantized version<br> All of our 8-bit quantized models now run natively in ComfyUI; however, you will still get the best inference speed with our Q8 patcher node.<br> 👉 Download here

โญ 06.05.2025 โ€“ LTXVideo 13B 0.9.7 Release โญ

🚀 What's New in LTXVideo 13B 0.9.7

  1. LTXV 13B 0.9.7<br> Delivers cinematic-quality videos at unprecedented speed.<br> 👉 Download here

  2. LTXV 13B Quantized 0.9.7<br> Offers reduced memory requirements and even faster inference speeds. Ideal for consumer-grade GPUs (e.g., NVIDIA 4090, 5090). Delivers outstanding quality with improved performance.<br> Important: to run the quantized version, please install the LTXVideo-Q8-Kernels package and use the dedicated flow below. Loading the model in ComfyUI with the LoadCheckpoint node won't work.<br> 👉 Download here<br> 🧩 Example ComfyUI flow available in the Example Workflows section.

  3. Latent Upscaling Models<br> Enable inference across multiple scales by upscaling latent tensors without decoding/encoding. Multiscale inference delivers high-quality results in a fraction of the time compared to similar models.<br> Important: make sure you put the models below in the models/upscale_models folder.<br> 👉 Spatial upscaling: Download here.<br> 👉 Temporal upscaling: Download here.<br> 🧩 Example ComfyUI flow available in the Example Workflows section.
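To illustrate the multiscale idea from item 3: most of the diffusion runs at a low resolution, the latents are upscaled directly without a VAE decode/encode round trip, and only a few refinement steps are spent at the higher resolution. The sketch below assumes a hypothetical `sample` function standing in for an LTXV sampling pass, and uses plain interpolation where the dedicated upscaling model would be used.

```python
import torch
import torch.nn.functional as F

def sample(latents: torch.Tensor, steps: int) -> torch.Tensor:
    """Hypothetical stand-in for an LTXV diffusion sampling pass."""
    return latents

def multiscale_generate(channels: int = 16, frames: int = 24) -> torch.Tensor:
    # Stage 1: run most of the diffusion at a low spatial resolution (cheap).
    low = sample(torch.randn(1, channels, frames, 32, 32), steps=30)
    # Stage 2: upscale the latent tensor directly, with no VAE decode/encode
    # round trip. The dedicated spatial upscaling model would replace this
    # plain interpolation, which is only an illustration.
    high = F.interpolate(low, scale_factor=(1, 2, 2), mode="trilinear", align_corners=False)
    # Stage 3: spend only a few refinement steps at the higher resolution.
    return sample(high, steps=6)
```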

Technical Updates

  1. New simplified flows and nodes<br> 1.1. Simplified image to video: Download here.<br> 1.2. Simplified image to video with extension: Download here.<br> 1.3. Simplified image to video with keyframes: Download here.<br>

17.04.2025 ⭐ LTXVideo 0.9.6 Release ⭐

LTXVideo 0.9.6 introduces:

  1. LTXV 0.9.6 – higher quality, faster, great for final output. Download from here.
  2. LTXV 0.9.6 Distilled – our fastest model yet (only 8 steps for generation), lighter, great for rapid iteration. Download from here.

Technical Updates

We introduce the STGGuiderAdvanced node, which applies different CFG and STG parameters at various diffusion steps. All flows have been updated to use this node and are designed to provide optimal parameters for the best quality. See the Example Workflows section.
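To make "different CFG and STG parameters at various diffusion steps" concrete, here is a sketch of a step-indexed guidance schedule of the kind such a node could apply. The breakpoints, scale values, and names below are invented for illustration and are not taken from STGGuiderAdvanced itself.

```python
from bisect import bisect_right

# Each entry: (first step it applies to, cfg scale, stg scale).
# The values are made up for illustration only.
SCHEDULE = [
    (0, 7.0, 1.0),   # early steps: strong guidance to lock in structure
    (10, 5.0, 0.7),  # middle steps: relax guidance
    (20, 3.0, 0.0),  # late steps: mostly unguided refinement
]

def guidance_for_step(step: int) -> tuple[float, float]:
    """Return (cfg, stg) for a given diffusion step from the piecewise schedule."""
    starts = [s for s, _, _ in SCHEDULE]
    _, cfg, stg = SCHEDULE[bisect_right(starts, step) - 1]
    return cfg, stg

for step in (0, 12, 25):
    print(step, guidance_for_step(step))
```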

5.03.2025 ⭐ LTXVideo 0.9.5 Release ⭐

LTXVideo 0.9.5 introduces:

  1. Improved quality with reduced artifacts.
  2. Support for higher resolution and longer sequences.
  3. Frame and sequence conditioning (beyond the first frame).
  4. Enhanced prompt understanding.
  5. Commercial license availability.

Technical Updates

Since LTXVideo is now fully supported in the ComfyUI core, we have removed the custom model implementation. Instead, we provide updated workflows to showcase the new features:

  1. Frame Conditioning – Enables interpolation between given frames.
  2. Sequence Conditioning – Allows motion interpolation from a given frame sequence, enabling video extension from the beginning, end, or middle of the original video.
  3. Prompt Enhancer – A new node that helps generate prompts optimized for the best model performance. See the Example Workflows section for more details.
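One way to picture frame and sequence conditioning is as a set of anchors, each pinning an encoded frame (or a short run of frames) to a position in the output timeline with some strength: a single pinned frame gives the sampler an interpolation target, while a contiguous run gives it motion to extend. The structure below is only a conceptual model under that assumption, not the workflows' actual node interface.

```python
from dataclasses import dataclass
import torch

@dataclass
class ConditioningAnchor:
    """One conditioning target: a latent pinned to a position in the output video."""
    latent: torch.Tensor   # encoded frame(s), e.g. shape (channels, n_frames, h, w)
    frame_index: int       # where in the output timeline it is pinned
    strength: float = 1.0  # how strongly the sampler should honor it

# Frame conditioning: pin the first and last frames to interpolate between them.
first = ConditioningAnchor(torch.randn(16, 1, 64, 64), frame_index=0)
last = ConditioningAnchor(torch.randn(16, 1, 64, 64), frame_index=96)

# Sequence conditioning: pin a short run of frames so the model extends its motion.
tail = ConditioningAnchor(torch.randn(16, 8, 64, 64), frame_index=0, strength=0.9)
```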

LTXTricks Update

The LTXTricks code has been integrated into this repository (in the /tricks folder) and will be maintained here. The original repo is no longer maintained, but all existing workflows should continue to function as expected.

22.12.2024

Fixed a bug which caused the model to produce artifacts on short negative prompts when using a native CLIP Loader node.

19.12.2024 ⭐ Update ⭐

  1. Improved model - removes "strobing texture" artifacts and generates better motion. Download from here.
  2. STG support
  3. Integrated image degradation system for improved motion generation.
  4. Additional optional initial-latent input to chain latents for high-resolution generation.
  5. Image captioning in the image-to-video flow.