ComfyUI implementation of https://github.com/layerdiffusion/LayerDiffuse.
Download the repository and unpack it into the custom_nodes folder in the ComfyUI installation directory.
Or clone it via git, starting from the ComfyUI installation directory:
cd custom_nodes
git clone git@github.com:huchenlei/ComfyUI-layerdiffuse.git
Run pip install -r requirements.txt to install the Python dependencies. You might experience a version conflict on diffusers if you have other extensions that depend on a different diffusers version; in that case, it is recommended to set up separate Python venvs.
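If you are unsure which diffusers version a given environment actually resolves to, a quick check like the one below (a diagnostic snippet, not part of this repo) can help spot such conflicts:

```python
# Hypothetical diagnostic snippet, not part of ComfyUI-layerdiffuse:
# print the diffusers version active in the current Python environment.
from importlib.metadata import PackageNotFoundError, version

try:
    print("diffusers version:", version("diffusers"))
except PackageNotFoundError:
    print("diffusers is not installed in this environment")
```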
If you want more control over getting the RGB image and the alpha channel mask separately, you can use this workflow.
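For reference, the split itself amounts to slicing the RGBA output into color channels and an alpha mask. The sketch below is a hypothetical post-processing step (the names and the [H, W, 4] tensor layout with values in [0, 1] are assumptions, not the node's actual code):

```python
import torch

def split_rgba(image: torch.Tensor):
    """Split an assumed [H, W, 4] RGBA tensor into an RGB image and an alpha mask."""
    rgb = image[..., :3]   # color channels
    alpha = image[..., 3]  # alpha channel, usable as a mask
    return rgb, alpha

# Stand-in for a decoded RGBA output.
rgba = torch.rand(512, 512, 4)
rgb, mask = split_rgba(rgba)
print(rgb.shape, mask.shape)  # torch.Size([512, 512, 3]) torch.Size([512, 512])
```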
Blending given FG
Blending given BG
The Forge impl's sanity check sets Stop at to 0.5 to get a better-quality BG.
This workflow might be inferior to other object-removal workflows.
In the SD Forge impl, there is a stop at param that determines when layer diffuse should stop in the denoising process. Behind the scenes, this param unapplies the LoRA and the c_concat cond after a certain step threshold. This is hard/risky to implement directly in ComfyUI, as it requires manually loading a model that has every change except the layer diffusion change applied. A workaround in ComfyUI is to run another img2img pass on the layer diffuse result to simulate the effect of the stop at param, as sketched below.
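As a rough illustration of what stop at does conceptually, here is a minimal sketch of a denoising loop (hypothetical names, not ComfyUI or Forge code) where the layer-diffuse patches are only applied up to a fractional step threshold:

```python
def denoise(latent, steps, stop_at, patched_step, base_step):
    """Run `steps` denoising steps; use the layer-diffuse-patched model only
    for the first `stop_at` fraction of steps, then the unpatched base model."""
    for i in range(steps):
        if i / steps < stop_at:
            latent = patched_step(latent, i)  # LoRA + c_concat cond applied
        else:
            latent = base_step(latent, i)     # patches "unapplied"
    return latent

# Toy usage with stand-in step functions: 10 patched steps, then 10 base steps.
result = denoise(
    latent=0.0,
    steps=20,
    stop_at=0.5,
    patched_step=lambda x, i: x + 10,  # placeholder for a patched sampler step
    base_step=lambda x, i: x + 1,      # placeholder for an unpatched sampler step
)
print(result)  # 110.0
```

The img2img workaround described above approximates the same split: the first pass runs with the layer diffuse patches applied, and the second pass refines the result with the unpatched model.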
Combines the previous workflows to generate the blended image and the FG given a BG. We found some color variation in the extracted FG; we need to confirm with the layer diffusion authors whether this is expected.
Requires batch size = 2N. Currently only for SD15.
Requires batch size = 2N. Currently only for SD15.
Requires batch size = 3N. Currently only for SD15.