Miscellaneous assortment of custom nodes for ComfyUI.

The nature of the nodes is varied, and they do not provide a comprehensive solution for any particular kind of application. They can be roughly categorized as follows: API support for setting up API requests, computer vision (primarily for masking or collages), and general utilities that streamline workflow setup or implement essential missing features.
To keep the documentation brief and to the point, I will use the following icons for special nodes.
Furthermore, I won't provide any documentation for the API nodes, as I think better, more comprehensive, and already documented solutions are available.
| Node | Description |
|------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| String | Just a string (text). Useful if you want it written before connecting it to a node, or if some custom node does not work properly with the PrimitiveNode. |
| Add String to Many β | Will append/prepend the string `to_add` to all the other input strings. |
| Color Clip | Clips the chosen `color` (or all the other colors) from an image. Both the target color and its complement can be set to white, black, or remain untouched. |
| Color Clip ADE20k ποΈ | Similar to Color Clip, but you pick the color from the ADE20k class list. Only useful for ADE20k semantic segmented images. |
| MonoMerge | Selects the maximum (or minimum) value between two images. Mainly used for mask composition. |
| AdjustRect | Receives a rectangle and returns a new rectangle that shares the same center, with its width adjusted to a multiple of `xm` and its height to a multiple of `ym`. Setting `round_mode` to exact returns a rectangle with the exact defined dimensions. |
| Repeat Into Grid | Tiles the provided image/latent into a grid of `columns` x `rows` tiles. |
| Conditioning Grid (cond) β | Creates conditioning areas of size `width` x `height`, forming a grid of `columns` x `rows` conditioning areas. The input notation can be read as: r{row}_c{column}. `strength` is the strength applied in all the areas, and `base` is the base conditioning prior to setting the tiles' conditioning. |
| Conditioning Grid (string) β | Similar to Conditioning Grid (cond), but generates the conditioning from the given strings (only). |
| Conditioning Grid (string) Advanced π β | Similar to Conditioning Grid (string), but requires BlenderNeko's Advanced CLIP Text Encode. |
| VAEEncodeBatch β | Receives multiples images and encodes them into a latent batch. |
| AnyToAny β β οΈ | Can be used to convert data between different formats or to compute things. The input data can be referenced in the expression via the letter `v`. |
| CLIPEncodeMultiple β | Receives individual strings β CLIPEncodes each β returns conditioning list. |
| CLIPEncodeMultipleAdvanced π β | Same as CLIPEncodeMultiple, but using BlenderNeko's Advanced CLIP Text Encode. |
| ControlNetHadamard | Receives a list of conditionings and a list of images β applies the controlnet only once per conditioning/image pair (does not apply every image to every conditioning). |
| ControlNetHadamard (manual) β | Similar to ControlNetHadamard but images are set via individual inputs. |
| ToCondList β | Receives individual conditionings β returns a list with all the input conditionings. |
| ToLatentList β | Receives individual latents β returns a list with all the input latents. |
| ToImageList β | Receives individual images β returns a list with all the input images. |
| FromListGetConds β | Receives a list of conditionings β returns the conditionings via individual slots. |
| FromListGetLatents β | Receives a list of latents β returns the latents via individual slots. |
| FromListGetImages β | Receives a list of images β returns the images via individual slots. |
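As an illustration of the kind of operation a node like Color Clip performs, here is a minimal NumPy sketch. The function name, argument names, and defaults are my own, not the node's actual interface, and the real node exposes more options:

```python
import numpy as np

def color_clip(image, color, target="white", complement="black"):
    """Set pixels matching `color` to `target` and all other pixels to
    `complement`; pass None for either to leave those pixels untouched."""
    # boolean mask of pixels whose RGB value equals the chosen color
    match = np.all(image == np.asarray(color, dtype=image.dtype), axis=-1)
    out = image.copy()
    if target == "white":
        out[match] = 255
    elif target == "black":
        out[match] = 0
    if complement == "white":
        out[~match] = 255
    elif complement == "black":
        out[~match] = 0
    return out
```

For example, clipping an ADE20k segmentation's class color to white and everything else to black yields a usable mask image.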
Nodes under the CV separator use or expose OpenCV functionality.
I will only provide partial documentation here, to clarify how to use the more complex nodes. The remaining nodes' usage should be clear from their names.
Returns a mask, in image format, with the result of the grabcut.
<details> <summary> usage </summary>The `tresh` input should be a gray image, possibly a black-and-white mask, but not necessarily (read the thresholds paragraph below). It is used to set most of the grabcut input mask's flags, excluding `GC_BGD` (sure background), which is set by the "frame".

The "frame" (the border margins of the image) has its size defined via the `pixels` input, and won't affect sides set to be ignored by the `frame_option` input (the corners common to neighboring sides will still be painted on the ignored sides).

The threshold inputs indicate the intensity thresholds used to set `GC_PR_FGD` (probable foreground) or `GC_FGD` (foreground). Values equal to or above the thresholds are set with the indicated flag. They can be set up in the following manners:
The thresholds also work as safeguards against potentially misleading or inconsistent input images, where the image may appear to be only black and white but actually contains values besides 0s and 255s.
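The flag-painting described above can be sketched in plain NumPy. This is an illustrative reimplementation, not the node's code: the argument names are assumptions, the constant values match OpenCV's `cv2.GC_*` flags, and the sketch ignores the `frame_option` side selection for brevity.

```python
import numpy as np

# OpenCV's grabCut flag values (cv2.GC_BGD, cv2.GC_FGD, cv2.GC_PR_BGD, cv2.GC_PR_FGD)
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

def build_grabcut_mask(thresh, pixels=10, thresh_maybe=100, thresh_sure=200):
    """Paint a grabcut input mask: intensity thresholds set the (probable)
    foreground, and the border frame sets the sure background."""
    # everything starts as probable background
    mask = np.full(thresh.shape[:2], GC_PR_BGD, np.uint8)
    # values at or above the thresholds get the corresponding foreground flag
    mask[thresh >= thresh_maybe] = GC_PR_FGD
    mask[thresh >= thresh_sure] = GC_FGD
    # the "frame": border margins are sure background
    mask[:pixels, :] = GC_BGD
    mask[-pixels:, :] = GC_BGD
    mask[:, :pixels] = GC_BGD
    mask[:, -pixels:] = GC_BGD
    return mask
```

The resulting mask would then be passed to `cv2.grabCut` in `cv2.GC_INIT_WITH_MASK` mode to obtain the segmentation.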
Similar to Framed Mask Grab Cut, but uses `thresh_maybe` to set the probable foreground and `thresh_sure` to set the foreground.

The `threshold` value is the same for both thresh image inputs; the `GC_FGD` flags are set by `thresh_sure` on top of the `GC_PR_FGD` flags set by `thresh_maybe`.
Outputs contours depending on their fitness, where the fitness function must be provided in the node's text box.
The expression may be long, but it can't contain multiple statements: it must be a single line that returns the fitness when evaluated.
<details> <summary> usage </summary>

`select` argument options:
- `MAX`, `MIN`: select the contour (singular) with the highest or lowest fitness, respectively; the evaluated expression should result in a number.
- `FILTER`: filters the contours (plural) that satisfy the fitness condition; the evaluated expression should result in a boolean.
- `MODE`: selects the contour (singular) whose fitness score is the mode of all the contours' fitness scores.

To compute the fitness, the input parameters can be used with the following names:
- `c`: the contour being evaluated, from the input contours
- `i`: the input image (optional)
- `a`: the input auxiliary contour (optional)

Functions from the math, opencv and numpy modules can be used with the prefixes `m`, `cv`, and `np`, respectively.
Additionally, the functions listed below can also be used without a prefix.
The following is an example fitness function to get the contour that best matches the auxiliary contour (the lower the value, the better the match): `cv.matchShapes(c, a, 1, 0.0)`
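A rough sketch of how the `MAX`/`MIN`/`FILTER` selection modes might evaluate such an expression; this is illustrative only, not the node's implementation. `MODE` and the prefix-free helpers are omitted, and OpenCV (exposed as `cv` by the real node) is left out to keep the sketch dependency-free.

```python
import math
import numpy as np

def select_contours(contours, expression, mode="MAX", image=None, aux=None):
    """Evaluate `expression` once per contour `c` (with optional image `i`
    and auxiliary contour `a`), then select contours according to `mode`."""
    def fitness(c):
        # the expression sees the same names the node documents: c, i, a, m, np
        return eval(expression, {"m": math, "np": np, "c": c, "i": image, "a": aux})

    if mode == "MAX":
        return [max(contours, key=fitness)]
    if mode == "MIN":
        return [min(contours, key=fitness)]
    if mode == "FILTER":
        return [c for c in contours if fitness(c)]
    raise ValueError(f"unknown mode: {mode}")
```

For instance, `select_contours(contours, "len(c)", "MAX")` picks the contour with the most points, while a boolean expression such as `"len(c) > 4"` with `FILTER` keeps every contour satisfying it.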
The list of available functions includes, for example, `math.sqrt(4 * area / math.pi)` (does not cache result).

All the listed functions cache their results at least once (details vary), so calling them more than once adds no computational overhead. This behavior was also added to the following list of OpenCV functions, which must be called without the `cv` prefix: