ComfyUI-Inpaint-CropAndStitch

Copyright (c) 2024-2025, Luis Quesada Torres - https://github.com/lquesada | www.luisquesada.com

Check ComfyUI here: https://github.com/comfyanonymous/ComfyUI

'✂️ Inpaint Crop' is a node that crops an image before sampling. The context area can be specified via the mask, expand pixels, and expand factor, or via a separate (optional) mask. '✂️ Inpaint Stitch' is a node that stitches the inpainted image back into the original image without altering unmasked areas.

The '✂️ Inpaint Crop' and '✂️ Inpaint Stitch' nodes make it very easy to inpaint only the masked area.
"✂️ Inpaint Crop" crops the image around the masked area (optionally with a context area that marks all parts relevant to the context), taking care of pre-resizing the image if desired, extending it for outpainting, filling mask holes, growing or blurring the mask, cutting around a larger context area, and resizing the cropped area to a target resolution.
The cropped image can be used in any standard workflow for sampling.
Then, the "✂️ Inpaint Stitch" node stitches the inpainted image back into the original image without altering unmasked areas.
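Conceptually, the pair of nodes works like the following simplified sketch. This is only an illustration of the crop/stitch idea, not the nodes' actual implementation: it assumes numpy float arrays in [0, 1], a non-empty mask, and omits pre-resizing, padding, and outpainting.

```python
import numpy as np

def crop_around_mask(image, mask, context_factor=1.5):
    """Crop image/mask to the mask's bounding box, grown by context_factor."""
    ys, xs = np.nonzero(mask > 0)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    # Grow the box by (context_factor - 1) times the mask size on each side.
    grow_y = int((y1 - y0) * (context_factor - 1))
    grow_x = int((x1 - x0) * (context_factor - 1))
    y0, y1 = max(0, y0 - grow_y), min(image.shape[0], y1 + grow_y)
    x0, x1 = max(0, x0 - grow_x), min(image.shape[1], x1 + grow_x)
    box = (y0, y1, x0, x1)
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1], box

def stitch_back(original, inpainted_crop, blend_mask, box):
    """Paste the sampled crop back, blending only where the mask is set."""
    y0, y1, x0, x1 = box
    out = original.copy()
    m = blend_mask[..., None]  # broadcast the (H, W) mask over channels
    out[y0:y1, x0:x1] = m * inpainted_crop + (1 - m) * out[y0:y1, x0:x1]
    return out
```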
The main advantages of inpainting only in a masked area with these nodes are speed (sampling a small crop is much faster than sampling the whole image) and quality (the crop can be resized to the resolution the model works best at, then blended back in seamlessly).
Note: this video tutorial is for the previous version of the nodes, but it still shows how to use them. The parameters are mostly the same.
The '✂️ Inpaint Crop' node exposes the following parameters:

- downscale_algorithm and upscale_algorithm: Which algorithms to use when resizing an image down or up.
- preresize: Shows options to resize the input image before any cropping: to ensure a minimum resolution, a maximum resolution, or both. This makes it very convenient to ensure that all input images have a certain resolution.
- mask_fill_holes: Whether to fill any holes (small or large) in the mask, that is, to mark fully enclosed areas as part of the mask.
- mask_expand_pixels: Grows the mask by the specified number of pixels.
- mask_invert: Whether to invert the mask, that is, to inpaint everything except the originally masked area.
- mask_blend_pixels: Grows the stitch mask and blurs it by the specified number of pixels so that the stitch blends in gradually and leaves no seams (see the mask preprocessing sketch after this list).
- mask_hipass_filter: Ignores mask values lower than the specified threshold. This prevents near-zero (almost black) mask values from counting as masked area, which would otherwise confuse users who believe an area is unmasked while the node treats it as masked.
- extend_for_outpainting: Shows options to extend the image in any or all directions (up/down/left/right) by a given factor. A factor >1 extends the image, e.g. 2 extends the image in that direction by the same amount of space the image already takes; a factor <1 crops the image, e.g. 0.75 removes 25% of the image in that direction.
- context_from_mask_extend_factor: Extends the context area by a factor of the size of the mask. The higher the value, the larger the area cropped around the mask, giving the model more context. 1 means do not grow; 2 means grow by the size of the mask in every direction.
- output_resize_to_target_size: Forces the cropped image to a specific resolution. This may involve resizing and extending beyond the original image, but the stitch node reverts those changes to integrate the result seamlessly.
- output_padding: Ensures that the cropped image's width and height are multiples of this value. Models require images padded to a certain multiple (8, 16, 32) to function properly (see the arithmetic sketch after this list).
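To make the mask options concrete, here is a rough sketch of how such preprocessing could be implemented with scipy. It is illustrative only: the actual nodes operate on torch tensors inside ComfyUI, and the function name and defaults below are hypothetical.

```python
import numpy as np
from scipy import ndimage

def preprocess_mask(mask, hipass=0.1, fill_holes=True,
                    expand_pixels=16, blend_pixels=32):
    # mask_hipass_filter: values below the threshold do not count as masked.
    m = np.where(mask >= hipass, mask, 0.0)
    # mask_fill_holes: mark fully enclosed unmasked areas as masked.
    if fill_holes:
        m = ndimage.binary_fill_holes(m > 0).astype(np.float32)
    # mask_expand_pixels: grow the mask by N pixels in every direction.
    if expand_pixels > 0:
        m = ndimage.binary_dilation(m > 0, iterations=expand_pixels)
        m = m.astype(np.float32)
    # mask_blend_pixels: blur the mask edge so the stitch blends without seams.
    if blend_pixels > 0:
        m = ndimage.gaussian_filter(m, sigma=blend_pixels / 3.0)
    return np.clip(m, 0.0, 1.0)
```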
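And a few lines of arithmetic showing how the extend factor and output padding behave; the helper names are hypothetical, but the math follows the descriptions above.

```python
def extend_size(size, factor):
    """extend_for_outpainting: >1 extends the canvas, <1 crops it."""
    return int(size * factor)

def pad_to_multiple(size, padding):
    """output_padding: round size up to the next multiple of padding."""
    return ((size + padding - 1) // padding) * padding

print(extend_size(768, 2.0))    # 1536: doubles the image in that direction
print(extend_size(768, 0.75))   # 576: removes 25% of the image
print(pad_to_multiple(570, 8))  # 576: e.g. so an 8x latent downscale divides evenly
```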
This example inpaints by sampling only a small section of the larger image, upscaling it to fit 512x512, then stitching and blending it back into the original image. Download the following example workflow from here or drag and drop the screenshot into ComfyUI.
This example uses Flux and requires the GGUF nodes.
Models used:

- Flux Dev Q5 GGUF from here. Put it in models/unet/.
- Flux 1. dev controlnet inpainting beta from here. Put it in models/controlnet/.
- t5 GGUF Q3_K_L from here. Put it in models/clip/.
- clip_l from here. Put it in models/clip/.
- ae VAE from here. Put it in models/vae/.

Download the following example workflow from here or drag and drop the screenshot into ComfyUI.
Install via ComfyUI-Manager, or go to the custom_nodes/ directory and run: $ git clone https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch.git
Tips for best results:

- Use an inpainting model, e.g. lazymixRealAmateur_v40Inpainting.
- Use "InpaintModelConditioning" instead of "VAE Encode (for Inpainting)" to be able to set denoise values lower than 1.
- Enable "resize to target size" and set it to a resolution your model prefers, e.g. 512x512 for SD 1.5, 1024x1024 for SDXL or Flux.
This repository uses some code from comfy_extras (https://github.com/comfyanonymous/ComfyUI), KJNodes (https://github.com/kijai/ComfyUI-KJNodes), and Efficiency Nodes (https://github.com/LucianoCirino/efficiency-nodes-comfyui), all of them licensed under GNU GENERAL PUBLIC LICENSE Version 3.
GNU GENERAL PUBLIC LICENSE Version 3, see LICENSE