Custom nodes extension for ComfyUI, including a workflow to use SDXL 1.0 with both the base and refiner checkpoints.
Instead of having separate workflows for different tasks, everything is integrated in one workflow file.
To install with the provided script (Windows):
- Put `SeargeSDXL-Installer.bat` and `SeargeSDXL-Installer.py` into the `ComfyUI_windows_portable` directory, in the same directory as the ComfyUI `run_cpu.bat` and `run_nvidia_gpu.bat`
- Make sure the `python_embeded` folder also exists in the same directory that you unpacked these install scripts to (see the layout sketch after this list)
- Run the `SeargeSDXL-Installer.bat` script and follow the instructions on screen
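For orientation, here is a sketch of the expected directory layout for a portable Windows install. The `ComfyUI` folder name is an assumption based on the standard portable package; the other names come from the steps above.

```
ComfyUI_windows_portable/
├── run_cpu.bat
├── run_nvidia_gpu.bat
├── python_embeded/            <- must exist next to the installer scripts
├── ComfyUI/                   <- assumed location of ComfyUI itself
├── SeargeSDXL-Installer.bat   <- put both installer files here
└── SeargeSDXL-Installer.py
```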
To install the extension manually instead:
- Run `python -m pip install opencv-python` in the python environment for ComfyUI at least once, to install a required dependency
- Navigate to the `ComfyUI/custom_nodes/` directory
- Run `git clone https://github.com/SeargeDP/SeargeSDXL.git`
- (alternative) Unpack the `SeargeSDXL` folder into the `ComfyUI/custom_nodes` directory and restart ComfyUI

To update an existing installation (the combined commands are sketched after this list):
- Navigate to the `ComfyUI/custom_nodes/` directory where you ran `git clone` before
- Run `git pull`
- (alternative) Unpack the `SeargeSDXL` folder from the latest release into `ComfyUI/custom_nodes` and overwrite the existing files
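Put together, the manual install and a later update look roughly like this on the command line (a sketch assuming ComfyUI's Python environment is active and ComfyUI lives in the current directory):

```sh
# Install the required dependency into ComfyUI's Python environment
python -m pip install opencv-python

# Manual install: clone the extension into the custom_nodes directory
cd ComfyUI/custom_nodes
git clone https://github.com/SeargeDP/SeargeSDXL.git

# Later updates: pull the latest changes inside the cloned folder,
# then restart ComfyUI
cd SeargeSDXL
git pull
```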
The checkpoint and model files listed below can now also be installed with the new install script (on Windows) instead of manually downloading them.
This workflow depends on certain checkpoint files being installed in ComfyUI; here is a list of the files that the workflow expects to be available.
If any of the mentioned folders does not exist in `ComfyUI/models`, create the missing folder and put the downloaded file into it.
I recommend downloading and copying all these files (the required, recommended, and optional ones) to make full use of all the features included in the workflow! A command-line download sketch for the two checkpoint files follows the list.
Download these files (from Huggingface):
- (required) download SDXL 1.0 Base with 0.9 VAE (7 GB) and copy it into `ComfyUI/models/checkpoints`
- (recommended) download SDXL 1.0 Refiner with 0.9 VAE (6 GB) and copy it into `ComfyUI/models/checkpoints`
- (optional) download Fixed SDXL 0.9 VAE (335 MB) and copy it into `ComfyUI/models/vae`
- (optional) download SDXL Offset Noise LoRA (50 MB) and copy it into `ComfyUI/models/loras`
- (recommended) download 4x-UltraSharp (67 MB) and copy it into `ComfyUI/models/upscale_models`
- (recommended) download 4x_NMKD-Siax_200k (67 MB) and copy it into `ComfyUI/models/upscale_models`
- (recommended) download 4x_Nickelback_70000G (67 MB) and copy it into `ComfyUI/models/upscale_models`
- (optional) download 1x-ITF-SkinDiffDetail-Lite-v1 (20 MB) and copy it into `ComfyUI/models/upscale_models`
- (required) download ControlNetHED (30 MB) and copy it into `ComfyUI/models/annotators`
- (required) download res101 (531 MB) and copy it into `ComfyUI/models/annotators`
- (recommended) download clip_vision_g (3.7 GB) and copy it into `ComfyUI/models/clip_vision`
- (recommended) download control-lora-canny-rank256 (774 MB) and copy it into `ComfyUI/models/controlnet`
- (recommended) download control-lora-depth-rank256 (774 MB) and copy it into `ComfyUI/models/controlnet`
- (recommended) download control-lora-recolor-rank256 (774 MB) and copy it into `ComfyUI/models/controlnet`
- (recommended) download control-lora-sketch-rank256 (774 MB) and copy it into `ComfyUI/models/controlnet`
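As an example, the two checkpoint files can be fetched from the command line like this. The Hugging Face repository paths and file names below are assumptions based on the official `stabilityai` repos, not taken from this project; verify them before running:

```sh
# Create the target folder if it does not exist yet
mkdir -p ComfyUI/models/checkpoints

# SDXL 1.0 Base with 0.9 VAE (repo path is an assumption, verify first)
wget -P ComfyUI/models/checkpoints \
  https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors

# SDXL 1.0 Refiner with 0.9 VAE (repo path is an assumption, verify first)
wget -P ComfyUI/models/checkpoints \
  https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0_0.9vae.safetensors
```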
Now everything should be prepared, but you may have to adjust some file names in the different model selector boxes in the workflow. Do so by clicking on the file name in the workflow UI and selecting the correct file from the list.
<img src="docs/img/main_readme/full_graph.png" width="768">

Find information about the latest changes here.
This is a minor update to make the workflow and custom node extension compatible with the latest changes in ComfyUI.
This update added support for FreeU v2 in addition to FreeU v1.
This update contains bug fixes that address issues found after v4.0 was released. The updated workflow is included as a `.json` file in the `workflow` folder, and the images in the `examples` folder have been updated to embed the v4.1 workflow.
This is the first release with the v4.x architecture of the custom node extension.
Some features that were originally in v3.4 or planned for v4.x were not included in the v4.0 release; they are now planned for a future version. This decision was made to get the new version released earlier, and the missing features should not matter for 99% of users.
So, what is actually missing?
<br><img src="docs/img/main_readme/ui-3.png" width="768">
(5 multi-purpose image inputs for revision and controlnet)
The workflow is included as a `.json` file in the `workflow` folder.
After updating Searge SDXL, always make sure to load the latest version of the `.json` file if you want to benefit from the latest features, updates, and bug fixes.
(you can check the version of the workflow that you are using by looking at the workflow information box)
Click this link to see the documentation
<img src="docs/img/main_readme/ui-1.png" width="768">(the main UI of the workflow)
The EVOLVED v4.x workflow is a new workflow, created from scratch. It requires the latest additions to the SeargeSDXL custom node extension, because it makes use of some new node types.
The interface for using this new workflow is also designed in a different way, with all parameters that are usually tweaked to generate images tightly packed together. This should make it easier to have every important element on the screen at the same time without scrolling.
<img src="docs/img/main_readme/ui-2.png" width="768">(more advanced UI elements right next to the main UI)
In text-to-image mode you can generate images from text descriptions. The source image and the mask (next to the prompt inputs) are not used in this mode.
<img src="docs/img/main_readme/ui_txt2img.png" width="768">(example of using text-to-image in the workflow)
<br> <img src="docs/img/main_readme/result_txt2img.png" width="512">(result of the text-to-image example)
In image-to-image mode you can generate images from text descriptions and a source image. The mask (next to the prompt inputs) is not used in this mode.
<img src="docs/img/main_readme/ui_img2img.png" width="768">(example of using image-to-image in the workflow)
<br> <img src="docs/img/main_readme/result_img2img.png" width="512">(result of the image-to-image example)
In inpainting mode you can generate images from text descriptions and a source image. Both the source image and the mask (next to the prompt inputs) are used in this mode.
This is similar to image-to-image mode, but it also lets you define a mask to selectively change only parts of the image.
<img src="docs/img/main_readme/ui_inpainting.png" width="768">(example of using inpainting in the workflow)
<br> <img src="docs/img/main_readme/result_inpainting.png" width="512">(result of the inpainting example)
A small collection of example images (with embedded workflow) can be found in the `examples` folder. Here is an overview of the included images.