ComfyUI-Inpaint-CropAndStitch
Copyright (c) 2024, Luis Quesada Torres - https://github.com/lquesada | www.luisquesada.com
Check ComfyUI here: https://github.com/comfyanonymous/ComfyUI
"✂️ Inpaint Crop" is a node that crops an image before sampling. The context area can be specified via the mask, expand pixels and expand factor or via a separate (optional) mask.
"✂️ Inpaint Stitch" is a node that stitches the inpainted image back into the original image without altering unmasked areas.
"✂️ Extend Image for Outpainting" is a node that extends an image and masks in order to use the power of Inpaint Crop and Stich (rescaling, blur, blend, restitching) for outpainting.
"✂️ Resize Image Before Inpainting" is a node that resizes an image before inpainting, for example to upscale it to keep more detail than in the original image.
The main advantages of inpainting only in a masked area with these nodes are that the unmasked parts of the image are left completely untouched, the masked area can be rescaled before sampling (e.g. upscaled to keep more detail, or forced to a specific resolution such as 1024x1024 for SDXL), and the result is blended back in without visible seams.
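In other words, the nodes crop a rectangle around the mask, sample only that region, and composite the result back so unmasked pixels stay untouched. Below is a minimal NumPy sketch of that composite (stitch) step, as an illustration of the concept rather than the nodes' actual implementation (which works on ComfyUI tensors and also handles rescaling and mask blurring):

```python
import numpy as np

def stitch_back(original, inpainted_crop, blend_mask, x0, y0):
    """Paste an inpainted crop back into the original image.

    original:       (H, W, C) float array, the untouched source image
    inpainted_crop: (h, w, C) float array, the sampled/inpainted region
    blend_mask:     (h, w) float array in [0, 1]; 1 = take the inpainted pixel
    x0, y0:         top-left corner where the crop was taken from
    """
    out = original.copy()
    h, w = blend_mask.shape
    region = out[y0:y0 + h, x0:x0 + w]
    m = blend_mask[..., None]  # broadcast over channels
    # Only pixels where the blend mask is > 0 change; everything else is
    # exactly the original image, so unmasked areas are never altered.
    out[y0:y0 + h, x0:x0 + w] = region * (1.0 - m) + inpainted_crop * m
    return out
```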
Parameters:
- context_expand_pixels: how much to grow the context area (i.e. the area used for sampling) around the original mask, in pixels. This provides more context for the sampling.
- context_expand_factor: how much to grow the context area (i.e. the area used for sampling) around the original mask, as a factor, e.g. 1.1 grows it by 10% of the size of the mask.
- fill_mask_holes: whether to fully fill any holes (small or large) in the mask, that is, mark fully enclosed areas as part of the mask.
- blur_mask_pixels: grows the mask and blurs it by the specified amount of pixels.
- invert_mask: whether to fully invert the mask, that is, only keep what was marked instead of removing what was marked.
- blend_pixels: grows the stitch mask and blurs it by the specified amount of pixels, so that the stitch is blended gradually and there are no seams.
- rescale_algorithm: rescale algorithm to use. bislerp gives super high quality but is very slow, recommended for stitch. bicubic gives high quality and is faster, recommended for crop.
- mode: Free size, Forced size, or Ranged size.
  - Free size uses rescale_factor to optionally rescale the content before sampling and scale it back before stitching, and padding to align to standard sizes.
  - Forced size uses force_width and force_height and upscales the content to that size before sampling, then downscales before stitching back. Use forced size e.g. for SDXL.
  - Ranged size uses min_width, max_width, min_height, and max_height, with a padding to align to standard sizes, then rescales back before stitching.
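To illustrate how context_expand_pixels and context_expand_factor interact, here is a rough sketch of growing a mask's bounding box into a context area. This is a simplified approximation, assuming the factor scales the box around its center and the result is clamped to the image; the node's actual logic additionally handles padding and the size modes.

```python
def context_area(mask_bbox, image_size, expand_pixels=0, expand_factor=1.0):
    """Grow the mask bounding box into the context area used for sampling.

    mask_bbox:  (x0, y0, x1, y1) bounding box of the mask (exclusive right/bottom)
    image_size: (width, height) of the image, used for clamping
    """
    x0, y0, x1, y1 = mask_bbox
    w, h = x1 - x0, y1 - y0
    # Grow by a factor around the center (e.g. 1.1 adds 10% of the mask size)
    # plus a fixed number of pixels on each side.
    grow_x = (expand_factor - 1.0) * w / 2 + expand_pixels
    grow_y = (expand_factor - 1.0) * h / 2 + expand_pixels
    width, height = image_size
    return (max(0, int(x0 - grow_x)), max(0, int(y0 - grow_y)),
            min(width, int(x1 + grow_x)), min(height, int(y1 + grow_y)))

# Example: a 100x80 mask box in a 1024x1024 image with 20 extra pixels and a
# 1.1 factor grows roughly 25 px left/right and 24 px top/bottom.
print(context_area((400, 400, 500, 480), (1024, 1024), 20, 1.1))
```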
This example inpaints by sampling only a small section of the larger image, upscaling it to fit within 512x512-768x768, then stitching and blending it back into the original image.
Download the following example workflow from here or drag and drop the screenshot into ComfyUI.
This example uses Flux and requires the GGUF nodes.
Models used:
- Flux Dev Q5 GGUF from here. Put it in models/unet/.
- Flux.1 dev ControlNet inpainting beta from here. Put it in models/controlnet/.
- t5 GGUF Q3_K_L from here. Put it in models/clip/.
- clip_l from here. Put it in models/clip/.
- ae VAE from here. Put it in models/vae/.
Download the following example workflow from here or drag and drop the screenshot into ComfyUI.
Install via ComfyUI-Manager or go to the custom_nodes/ directory and run $ git clone https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch.git
Use an inpainting model, e.g. lazymixRealAmateur_v40Inpainting.
Use "InpaintModelConditioning" instead of "VAE Encode (for Inpainting)" to be able to set denoise values lower than 1.
If you want to inpaint fast with SD 1.5, use ranged size with min width and height 512, max width and height 768, and padding 32. Set a high rescale_factor (e.g. 10); it will be adapted to the right resolution.
If you want to inpaint with SDXL, use forced size = 1024.
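To make the SD 1.5 recommendation concrete, here is a hedged sketch of how a ranged-size target resolution could be derived from a crop. This is only an approximation for illustration (clamp each side to [min, max], then round up to a multiple of padding); the node's real selection logic may differ in detail.

```python
import math

def ranged_target(crop_w, crop_h, min_size=512, max_size=768, padding=32):
    """Pick a sampling resolution for a crop under ranged size settings."""
    def fit(side):
        side = max(min_size, min(max_size, side))   # clamp to [min, max]
        return math.ceil(side / padding) * padding  # align to padding
    return fit(crop_w), fit(crop_h)

# SD 1.5 ranged size example from the tip above:
print(ranged_target(300, 900))   # -> (512, 768)
# For SDXL, forced size simply samples at a fixed resolution, e.g. 1024x1024.
```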
The image is resized (e.g. upscaled) before the inpaint and context areas are cropped. If the mask is too small compared to the image, the crop node will try to resize the image to a very large size first, which is memory inefficient and can cause a memory overflow. See https://github.com/lquesada/ComfyUI-Inpaint-CropAndStitch/issues/42
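For a sense of scale (hypothetical numbers, only to illustrate the issue linked above), a small mask combined with a large upscale factor blows up the intermediate image very quickly:

```python
# Hypothetical example: a 64x64 masked area in a 2048x2048 image that should
# be sampled at 1024x1024 implies a 16x upscale. Applying that factor to the
# whole image before cropping would mean holding a huge tensor in memory:
factor = 1024 / 64                  # 16x
w = h = int(2048 * factor)          # 32768 x 32768
gib = w * h * 3 * 4 / 2**30         # float32 RGB
print(f"{w}x{h} ≈ {gib:.0f} GiB")   # ≈ 12 GiB for a single image
# Resizing the image to a sensible size first avoids this blow-up.
```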
This repository uses some code from comfy_extras (https://github.com/comfyanonymous/ComfyUI), KJNodes (https://github.com/kijai/ComfyUI-KJNodes), and Efficiency Nodes (https://github.com/LucianoCirino/efficiency-nodes-comfyui), all of them licensed under GNU GENERAL PUBLIC LICENSE Version 3.
GNU GENERAL PUBLIC LICENSE Version 3, see LICENSE