A ComfyUI extension allowing the interrogation of booru tags from images.
Based on SmilingWolf/wd-v1-4-tags and toriato/stable-diffusion-webui-wd14-tagger.
All models created by SmilingWolf.
## Installation
1. `git clone https://github.com/pythongosssss/ComfyUI-WD14-Tagger` into the `custom_nodes` folder, e.g. `custom_nodes\ComfyUI-WD14-Tagger`
2. Change to the `custom_nodes\ComfyUI-WD14-Tagger` folder you just created, e.g. `cd C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-WD14-Tagger` (or wherever you have it installed)
3. Install the Python dependencies:
   - Windows portable install (embedded Python): `../../../python_embeded/python.exe -s -m pip install -r requirements.txt`
   - Other installs: `pip install -r requirements.txt`
## Usage
Add the node via `image` -> `WD14Tagger|pysssss`.
Models are automatically downloaded at runtime if missing.
Supports tagging and outputting multiple batched inputs.
Available models include `MOAT`; the most popular is `ConvNextV2`.
## Quick interrogation
Quick interrogation of images is also available on any node that displays an image, e.g. a `LoadImage`, `SaveImage`, or `PreviewImage` node.
Simply right click on the node (or, if it displays multiple images, on the image you want to interrogate) and select `WD14 Tagger` from the menu.
Settings used for this are in the `settings` section of `pysssss.json`.
## Offline use
The simplest way is to use the extension online: interrogate an image once and the model will be downloaded and cached. However, if you want to manually download the models:
- Create a `models` folder (in the same folder as `wd14tagger.py`)
- Use the model URLs from the list in `pysssss.json`
- Download `model.onnx` and name it with the model name, e.g. `wd-v1-4-convnext-tagger-v2.onnx`
- Download `selected_tags.csv` and name it with the model name, e.g. `wd-v1-4-convnext-tagger-v2.csv`
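The manual steps above can be sketched in a short script. This is an illustrative sketch, not part of the extension: it assumes SmilingWolf's models are hosted on Hugging Face under `huggingface.co/SmilingWolf/<model-name>` (check the URLs in `pysssss.json` before relying on this layout), and the `model_files`/`download_model` helper names are hypothetical.

```python
"""Sketch: manually fetch a WD14 model for offline use.

Assumption: models live at huggingface.co/SmilingWolf/<model-name>;
verify against the URL list in pysssss.json.
"""
import os
import urllib.request

HF_BASE = "https://huggingface.co/SmilingWolf/{model}/resolve/main/{file}"

def model_files(model_name, models_dir="models"):
    """Map a model name to the local file names the tagger expects."""
    return (
        os.path.join(models_dir, model_name + ".onnx"),
        os.path.join(models_dir, model_name + ".csv"),
    )

def download_model(model_name, models_dir="models"):
    """Download model.onnx and selected_tags.csv, renamed to the model name."""
    onnx_path, csv_path = model_files(model_name, models_dir)
    os.makedirs(models_dir, exist_ok=True)
    for remote, local in (("model.onnx", onnx_path),
                          ("selected_tags.csv", csv_path)):
        urllib.request.urlretrieve(
            HF_BASE.format(model=model_name, file=remote), local)

if __name__ == "__main__":
    download_model("wd-v1-4-convnext-tagger-v2")
```

The key detail is the renaming: the remote files are always called `model.onnx` and `selected_tags.csv`, but locally they must carry the model's name so the extension can find them.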
## Requirements
Either `onnxruntime` (recommended; interrogation is still fast on CPU, and it is included in `requirements.txt`)
or `onnxruntime-gpu` (allows use of the GPU; many people have issues with this, so if you try it I can't provide support).