ComfyUI Extension: LanPaint

Authored by scraed


Achieve seamless inpainting results without needing a specialized inpainting model.


<div align="center">

LanPaint: Universal Inpainting Sampler with "Think Mode"


</div>

Universally applicable inpainting ability for every model. LanPaint sampler lets the model "think" through multiple iterations before denoising, enabling you to invest more computation time for superior inpainting quality.

This is the official implementation of "Lanpaint: Training-Free Diffusion Inpainting with Exact and Fast Conditional Inference", accepted by TMLR. This repository contains the ComfyUI extension; the local Python benchmark code is published at LanPaintBench.

🎬 NEW: LanPaint now supports video inpainting and outpainting based on Wan 2.2!

<div align="center">

| Original Video | Mask (edit T-shirt text) | Inpainted Result |
|:--------------:|:------------------------:|:----------------:|
| Original | Mask | Result |

Video Inpainting Example: 81 frames with temporal consistency

</div>

Check our latest Wan 2.2 Video Examples, Wan 2.2 Image Examples, and Qwen Image Edit 2509 support.

Table of Contents

Features

  • Universal Compatibility – Works instantly with almost any model (SD 1.5, XL, 3.5, Flux, HiDream, Qwen-Image, Wan2.2 or custom LoRAs) and ControlNet.
    Inpainting Result 13
  • No Training Needed – Works out of the box with your existing model.
  • Easy to Use – Same workflow as standard ComfyUI KSampler.
  • Flexible Masking – Supports any mask shape, size, or position for inpainting/outpainting.
  • No Workarounds – Generates 100% new content (no blending or smoothing) without relying on partial denoising.
  • Beyond Inpainting – You can even use it as a simple way to generate consistent characters.

Warning: LanPaint shows degraded performance on distillation models such as Flux.dev, due to an issue similar to the one affecting LoRA training. Please use low Flux guidance (1.0-2.0) to mitigate this.

Quickstart

  1. Install ComfyUI: Follow the official ComfyUI installation guide to set up ComfyUI on your system, or ensure your existing ComfyUI version is newer than 0.3.11.
  2. Install ComfyUI-Manager: Add the ComfyUI-Manager for easy extension management.
  3. Install LanPaint Nodes:
    • Via ComfyUI-Manager: Search for "LanPaint" in the manager and install it directly.
    • Manually: Click "Install via Git URL" in ComfyUI-Manager and input the GitHub repository link:
      https://github.com/scraed/LanPaint.git
      
      Alternatively, clone this repository into the ComfyUI/custom_nodes folder.
  4. Restart ComfyUI: Restart ComfyUI to load the LanPaint nodes.
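For the manual route, steps 3-4 can be sketched as shell commands (assuming a default ComfyUI directory layout):

```shell
# Manual install: clone LanPaint into ComfyUI's custom_nodes folder,
# then restart ComfyUI so the new nodes are discovered.
cd ComfyUI/custom_nodes
git clone https://github.com/scraed/LanPaint.git
```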

Once installed, you'll find the LanPaint nodes under the "sampling" category in ComfyUI. Use them just like the default KSampler for high-quality inpainting!

How to Use Examples:

  1. Navigate to the example folder (e.g., example_1) and download all pictures.
  2. Drag InPainted_Drag_Me_to_ComfyUI.png into ComfyUI to load the workflow.
  3. Download the required model (e.g., by clicking Model Used in This Example).
  4. Load the model in ComfyUI.
  5. Upload Masked_Load_Me_in_Loader.png to the "Load image" node in the "Mask image for inpainting" group (second from left), or to the Prepare Image node.
  6. Queue the task; you will get inpainted results from LanPaint. Some examples also give you results from other methods for comparison.

Video Examples (Beta)

LanPaint now supports video inpainting with Wan 2.2, enabling you to seamlessly inpaint masked regions across video frames while maintaining temporal consistency.

Note: LanPaint supports video inpainting for longer sequences (e.g., 81 frames), but processing time increases significantly (please check the Resource Consumption section for details) and performance may become unstable. For optimal results and stability, we recommend limiting video inpainting to 40 frames or fewer.

Wan 2.2 Video Inpainting

Example: Wan2.2 t2v 14B, 480p video (11:6), 40 frames, LanPaint K Sampler, 2 steps of thinking

| Original Video | Mask (Add a white hat) | Inpainted Result |
|:--------------:|:----------------------:|:----------------:|
| Original Video | Mask | Inpainted Result |

View Workflow & Masks

You need to follow the ComfyUI version of Wan2.2 T2V workflow to download and install the T2V model.

Wan 2.2 Video Outpainting

Extend your videos beyond their original boundaries with LanPaint's video outpainting capability based on Wan 2.2. This feature allows you to expand the canvas of your videos while maintaining coherent motion and context.

Example: Wan2.2 t2v 14B, 480p video (1:1 outpaint to 11:6), 40 frames, LanPaint K Sampler, 2 steps of thinking

| Original Video | Mask (Expand to 880x480) | Outpainted Result |
|:--------------:|:------------------------:|:-----------------:|
| Original Video | Mask | Outpainted Result |

View Workflow & Masks

You need to follow the ComfyUI version of Wan2.2 T2V workflow to download and install the T2V model.

Resource Consumption

<table>
  <thead>
    <tr>
      <th align="left">Processing Mode</th>
      <th align="left">Resolution</th>
      <th align="left">Frames Processed</th>
      <th align="left">VRAM Required</th>
      <th align="left">Total Runtime (20 steps)</th>
    </tr>
  </thead>
  <tbody>
    <tr style="background-color: #e8f4f8;">
      <td><strong>Inpainting</strong></td>
      <td>880×480 (11:6)</td>
      <td>40 frames</td>
      <td>39.8 GB</td>
      <td><strong>05:37 min</strong></td>
    </tr>
    <tr style="background-color: #e8f4f8;">
      <td><strong>Inpainting</strong></td>
      <td>480×480 (1:1)</td>
      <td>40 frames</td>
      <td>38.0 GB</td>
      <td><strong>05:35 min</strong></td>
    </tr>
    <tr style="background-color: #e8f4f8;">
      <td><strong>Outpainting</strong></td>
      <td>880×480 (11:6)</td>
      <td>40 frames</td>
      <td>40.2 GB</td>
      <td><strong>05:36 min</strong></td>
    </tr>
    <tr style="background-color: #fff4e6;">
      <td><strong>Inpainting</strong></td>
      <td>880×480 (11:6)</td>
      <td>81 frames</td>
      <td>43.3 GB</td>
      <td><strong>16:23 min</strong></td>
    </tr>
    <tr style="background-color: #fff4e6;">
      <td><strong>Inpainting</strong></td>
      <td>480×480 (1:1)</td>
      <td>81 frames</td>
      <td>39.8 GB</td>
      <td><strong>14:25 min</strong></td>
    </tr>
    <tr style="background-color: #fff4e6;">
      <td><strong>Outpainting</strong></td>
      <td>880×480 (11:6)</td>
      <td>81 frames</td>
      <td>42.6 GB</td>
      <td><strong>13:46 min</strong></td>
    </tr>
  </tbody>
</table>

<sub>Test Platform: All tests were conducted on an NVIDIA RTX Pro 6000.<br> Model Used: wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors and wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors.<br> Processing Steps: 20 sampling steps x 2 (LanPaint steps of thinking).</sub>

Note: To further reduce VRAM requirements, we recommend loading CLIP on CPU.

Image Examples

Example Wan2.2: InPaint (LanPaint K Sampler, 5 steps of thinking)

We are excited to announce that LanPaint now supports Wan2.2 text to image generation with Wan2.2 T2V model.

Inpainting Result 45
View Workflow & Masks

You need to follow the ComfyUI version of Wan2.2 T2V workflow to download and install the T2V model.

Example Wan2.2: Partial InPaint (LanPaint K Sampler, 5 steps of thinking)

Sometimes we don't want to inpaint completely new content, but rather let the inpainted image reference the original. One option is to inpaint with an edit model like Qwen Image Edit. Another is to perform a partial inpaint: letting the diffusion process start at an intermediate step rather than from step 0.

Inpainting Result 46
View Workflow & Masks

You need to follow the ComfyUI version of Wan2.2 T2V workflow to download and install the T2V model.

Example Qwen Edit 2509: InPaint

Check our latest updated Masked Qwen Edit Workflow for Qwen Image Edit 2509. Download the model at Qwen Image Edit 2509 Comfy.

Qwen Result 3

Example Qwen Edit 2508: InPaint

Qwen Result 2 Check the Masked Qwen Edit Workflow. You need to follow the ComfyUI version of the Qwen Image Edit workflow to download and install the model.

Example Qwen Image: InPaint (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 14
View Workflow & Masks

You need to follow the ComfyUI version of Qwen Image workflow to download and install the model.

The following examples utilize a random seed of 0 to generate a batch of 4 images for variance demonstration and fair comparison. (Note: Generating 4 images may exceed your GPU memory; please adjust the batch size as necessary.)

Qwen Result 1 Also check Qwen Inpaint Workflow and Qwen Outpaint Workflow. You need to follow the ComfyUI version of Qwen Image workflow to download and install the model.

Example HiDream: InPaint (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 8
View Workflow & Masks

You need to follow the ComfyUI version of HiDream workflow to download and install the model.

Example HiDream: OutPaint (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 8
View Workflow & Masks

You need to follow the ComfyUI version of HiDream workflow to download and install the model. Thanks Amazon90 for providing this example.

Example SD 3.5: InPaint (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 8
View Workflow & Masks

You need to follow the ComfyUI version of SD 3.5 workflow to download and install the model.

Example Flux: InPaint (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 7
View Workflow & Masks Model Used in This Example (Note: Prompt First mode is disabled for Flux, as it does not use CFG guidance.)

Example SDXL 0: Character Consistency (Side View Generation) (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 6
View Workflow & Masks Model Used in This Example

(Trick 1: You can emphasize the character by copying its image multiple times with Photoshop. Here I have made one extra copy.)

(Trick 2: Use prompts like multiple views, multiple angles, clone, turnaround. Use LanPaint's Prompt First mode (not supported on Flux).)

(Trick 3: Remember LanPaint can inpaint: mask non-consistent regions and try again!)

Example SDXL 1: Basket to Basketball (LanPaint K Sampler, 2 steps of thinking)

Inpainting Result 1
View Workflow & Masks Model Used in This Example

Example SDXL 2: White Shirt to Blue Shirt (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 2
View Workflow & Masks Model Used in This Example

Example SDXL 3: Smile to Sad (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 3
View Workflow & Masks Model Used in This Example

Example SDXL 4: Damage Restoration (LanPaint K Sampler, 5 steps of thinking)

Inpainting Result 4
View Workflow & Masks Model Used in This Example

Example SDXL 5: Huge Damage Restoration (LanPaint K Sampler, 20 steps of thinking)

Inpainting Result 5
View Workflow & Masks Model Used in This Example

Check more for use cases like inpaint on fine tuned models and face swapping, thanks to Amazon90.

Usage

Workflow Setup
Same as default ComfyUI KSampler - simply replace with LanPaint KSampler nodes. The inpainting workflow is the same as the SetLatentNoiseMask inpainting workflow.

Note

  • LanPaint requires binary masks (values of 0 or 1) without opacity or smoothing. To ensure compatibility, set the mask's opacity and hardness to maximum in your mask editor. During inpainting, any mask with smoothing or gradients will automatically be converted to a binary mask.
  • LanPaint relies heavily on your text prompts to guide inpainting - explicitly describe the content you want generated in the masked area. If results show artifacts or mismatched elements, counteract them with targeted negative prompts.
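As an illustration of the binary-mask requirement above, here is a minimal sketch of collapsing a soft mask to hard 0/1 values (using NumPy; the 0.5 threshold is an assumption for illustration, not LanPaint's internal value):

```python
import numpy as np

def binarize_mask(mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert a soft mask (values in [0, 1]) to a hard binary mask.

    Any smoothing or gradient in the mask is collapsed to 0/1 values,
    mirroring the automatic conversion described above. The 0.5 threshold
    is an illustrative assumption, not LanPaint's internal value.
    """
    return (mask >= threshold).astype(np.float32)

soft = np.array([[0.0, 0.2], [0.7, 1.0]])
print(binarize_mask(soft))  # soft edges collapse to hard 0/1 values
```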

Basic Sampler

Samplers

  • LanPaint KSampler: The most basic and easy to use sampler for inpainting.
  • LanPaint KSampler (Advanced): Full control of all parameters.

LanPaint KSampler

Simplified interface with recommended defaults:

  • Steps: 20-50. More steps give more "thinking" and better results.
  • LanPaint NumSteps: The number of thinking turns before denoising. Recommend 5 for most tasks (which means 5 times slower than sampling without thinking). Use 10 for more challenging tasks.
  • LanPaint Prompt mode: Image First mode and Prompt First mode. Image First focuses on the image and inpaints based on image context (possibly ignoring the prompt), while Prompt First focuses more on the prompt. Use Prompt First for tasks like character consistency. (Technically, Prompt First mode changes the CFG scale to a negative value in the BIG score to emphasize the prompt, which costs some image quality.)

LanPaint KSampler (Advanced)

Full parameter control. Key parameters:

| Parameter | Range | Description |
|-----------|-------|-------------|
| Steps | 0-100 | Total steps of diffusion sampling. Higher means better inpainting. Recommend 20-50. |
| LanPaint_NumSteps | 0-20 | Reasoning iterations per denoising step ("thinking depth"). Easy tasks: 2-5. Hard tasks: 5-10. |
| LanPaint_Lambda | 0.1-50 | Content alignment strength (higher = stricter). Recommend 4.0-10.0. |
| LanPaint_StepSize | 0.1-1.0 | The step size of each thinking step. Recommend 0.1-0.5. |
| LanPaint_Beta | 0.1-2.0 | The step-size ratio between the masked and unmasked regions. A small value can compensate for high lambda values. Recommend 1.0. |
| LanPaint_Friction | 0.0-100.0 | The friction of the Langevin dynamics. Higher is slower but more stable; lower is faster but less stable. Recommend 10.0-20.0. |
| LanPaint_EarlyStop | 0-10 | Stop the LanPaint iterations this many steps before the final sampling step. Helps remove artifacts in some cases. Recommend 1-5. |
| LanPaint_PromptMode | Image First / Prompt First | Image First focuses on the image context and may ignore the prompt. Prompt First focuses more on the prompt. |

For detailed descriptions of each parameter, simply hover your mouse over the corresponding input field to view tooltips with additional information.
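To make the recommendations concrete, here is a hypothetical pair of presets distilled from the parameter table. The key names mirror the node inputs for readability, but this dict is purely illustrative, not LanPaint's API:

```python
# Hypothetical presets distilled from the parameter table above.
# The key names mirror the node inputs for readability only; this is
# an illustrative sketch, not LanPaint's actual configuration API.
PRESETS = {
    "easy": {   # simple fills: small masks, clear prompts
        "steps": 30,
        "LanPaint_NumSteps": 3,
        "LanPaint_Lambda": 4.0,
        "LanPaint_StepSize": 0.3,
        "LanPaint_Beta": 1.0,
        "LanPaint_Friction": 15.0,
        "LanPaint_EarlyStop": 1,
    },
    "hard": {   # challenging edits: large masks, heavy restructuring
        "steps": 50,
        "LanPaint_NumSteps": 10,
        "LanPaint_Lambda": 10.0,
        "LanPaint_StepSize": 0.1,
        "LanPaint_Beta": 1.0,
        "LanPaint_Friction": 20.0,
        "LanPaint_EarlyStop": 5,
    },
}

def preset(difficulty: str) -> dict:
    """Return a copy of the chosen preset so callers can tweak it safely."""
    return dict(PRESETS[difficulty])
```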

LanPaint Mask Blend

This node blends the original image with the inpainted image based on the mask. It is useful if you want the unmasked region to match the original image pixel perfectly.
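The blend the node performs amounts to a per-pixel composite. A minimal NumPy sketch (the function name and signature are illustrative, not the node's implementation):

```python
import numpy as np

def mask_blend(original, inpainted, mask):
    """Composite: keep original pixels where mask == 0 and inpainted pixels
    where mask == 1, so the unmasked region matches the original exactly."""
    mask = mask.astype(np.float32)
    return inpainted * mask + original * (1.0 - mask)

orig = np.array([[10.0, 20.0], [30.0, 40.0]])
paint = np.array([[1.0, 2.0], [3.0, 4.0]])
m = np.array([[0, 1], [1, 0]])
# Unmasked pixels (10, 40) are preserved; masked pixels are replaced (2, 3).
print(mask_blend(orig, paint, m))
```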

LanPaint KSampler (Advanced) Tuning Guide

For challenging inpainting tasks:

1️⃣ Boost Quality: Increase the total number of sampling steps (very important!), LanPaint_NumSteps (thinking iterations), or LanPaint_Lambda if the inpainted result does not meet your expectations.

2️⃣ Boost Speed: Decrease LanPaint_NumSteps to accelerate generation! If you want better results but still need fewer steps, consider:

  • Increasing LanPaint_StepSize to speed up the thinking process.
  • Decreasing LanPaint_Friction to make the Langevin dynamics converge faster.

3️⃣ Fix Instability:
If you find the results have weird textures, try:

  • Increase LanPaint_Friction to make the Langevin dynamics more stable.
  • Reduce LanPaint_StepSize to use smaller step size.
  • Reduce LanPaint_Beta if you are using a high lambda value.

⚠️ Notes:

  • For effective tuning, fix the seed and adjust parameters incrementally while observing the results. This helps isolate the impact of each setting. It is better to do this with a batch of images to avoid overfitting to a single image.

Community Showcase

Discover how the community is using LanPaint! Here are some user-created tutorials:

Submit a PR to add your tutorial/video here, or open an Issue with details!

Updates

  • 2025/08/08
    • Add Qwen image support
  • 2025/06/21
    • Update the algorithm with enhanced stability and outpaint performance.
    • Add outpaint example
    • Supports Sampler Custom (Thanks to MINENEMA)
  • 2025/06/04
    • Add more sampler support.
    • Add early stopping to advanced sampler.
  • 2025/05/28
    • Major update on the Langevin solver. It is now much faster and more stable.
    • Greatly simplified the parameters for advanced sampler.
    • Fix performance issue on Flux and SD 3.5
  • 2025/04/16
    • Added preliminary HiDream support
  • 2025/03/22
    • Added preliminary Flux support
    • Added Tease Mode
  • 2025/03/10
    • LanPaint has received a major update! All examples now use the LanPaint K Sampler, offering a simplified interface with enhanced performance and stability.
  • 2025/03/06:

ToDo

  • Try implementing Detailer support
  • ~~Provide inference code without GUI.~~ Check our local Python benchmark code LanPaintBench.

Citation

@misc{zheng2025lanpainttrainingfreediffusioninpainting,
      title={Lanpaint: Training-Free Diffusion Inpainting with Exact and Fast Conditional Inference}, 
      author={Candi Zheng and Yuan Lan and Yang Wang},
      year={2025},
      eprint={2502.03491},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2502.03491}, 
}