# ComfyUI_Jags_VectorMagic
A collection of nodes to explore vector and image manipulation.
✨🍬Please note: the work in this repo is still in progress, and code may break until everything is sorted out. An update will be announced here once it is stable; thanks for your support. If you run into an issue with any of the nodes, kindly share your workflow, and always update the ComfyUI base along with all its dependencies.✨🍬
<img src = "images/00_01_00005_.png" width = "50%"><img src = "images/2023-12-03_23-10-27.png" width = "50%" > </br>
<img src = "images/2023-12-03_23-02-42.png" width = "50%" > </br>
You can see more examples of the workflow, and of proper model selection for each type of segmentation mask, in the <a href = "https://github.com/jags111/ComfyUI_Jags_VectorMagic/wiki">VECTOR MAGIC WIKI </a></br>
CLIPSeg adds a minimal decoder on top of a frozen CLIP model for zero- and one-shot image segmentation. The CLIPSeg node generates a binary mask for a given input image and text prompt. <img src = "images/2023-12-08_14-59-13.png" width = "50%" > </br> <img src = "images/2023-12-08_15-02-32.png" width = "50%" > </br>
Inputs:
Outputs:
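At its core, turning the model's prediction into a binary mask reduces to a sigmoid over the decoder logits followed by a threshold. A minimal sketch of that step (the function name and the 0.4 threshold here are illustrative assumptions, not the node's actual defaults):

```python
import numpy as np

def logits_to_mask(logits, threshold=0.4):
    # Squash raw decoder logits into [0, 1] probabilities with a sigmoid...
    probs = 1.0 / (1.0 + np.exp(-logits))
    # ...then binarize: pixels at or above the threshold match the text prompt.
    return (probs >= threshold).astype(np.uint8)

logits = np.array([[-2.0, 0.5],
                   [ 3.0, -0.1]])
print(logits_to_mask(logits))
# → [[0 1]
#    [1 1]]
```

Raising the threshold shrinks the mask to only the most confident pixels; lowering it grows the mask.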
The CombineSegMasks node combines two or optionally three masks into a single mask to improve masking of different areas.<br>
<img src = "images/2023-12-08_17-34-47.png" width = "50%" > </br> <img src = "images/2023-12-08_17-40-42.png" width = "50%" > </br>
Inputs:
Outputs:
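Assuming the node unions its inputs (check the node source for the exact blend mode it exposes), combining two or three masks can be sketched as a logical OR:

```python
import numpy as np

def combine_seg_masks(mask_a, mask_b, mask_c=None):
    # Union the binary masks: a pixel is kept if any input mask selects it.
    combined = np.logical_or(mask_a, mask_b)
    if mask_c is not None:  # third mask is optional, as in the node
        combined = np.logical_or(combined, mask_c)
    return combined.astype(np.uint8)

a = np.array([[1, 0], [0, 0]], dtype=np.uint8)
b = np.array([[0, 1], [0, 0]], dtype=np.uint8)
print(combine_seg_masks(a, b))
# → [[1 1]
#    [0 0]]
```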
<img src = "images/JagsvectorworkSDXL_tiledsampler_explore001.png" width = "50%" ><br>
<img src = "images/00UP-00_00003_.png" width = "50%" >Link to the workflow and explanations : <a href= "https://github.com/jags111/ComfyUI_Jags_VectorMagic/wiki"> WIKI </a>
Many thanks for a wonderful flower segmentation pipeline from <a href= "https://github.com/mendez-luisjose/Flower-Instance-Segmentation-Model"> Flower-Instance-Segmentation</a> which is now added to the Yolov8 models repo for exploration. <br>
The Python library <i><a href="https://github.com/danthedeckie/simpleeval" >simpleeval</a></i> must be installed if you wish to use the expression nodes:
<pre>pip install simpleeval</pre>
simpleeval is a single-file library for easily adding evaluatable expressions to Python projects. Say you want to allow a user to set an alarm volume that could depend on the time of day, the alarm level, how many previous alarms have gone off, and whether music is playing at the time.
Check the Notes for more information.
To install, drop the "ComfyUI_Jags_VectorMagic" folder into the "...\ComfyUI\custom_nodes" directory and restart the UI.<br> Ensure all the dependencies in requirements.txt are met.<br> The recommended method, however, is to install it via the <a href="https://github.com/ltdrdata/ComfyUI-Manager">ComfyUI Manager</a>: search for this name in the node list, install it from there, and restart the UI. The Manager takes care of all the dependencies for you.
Put all segmentation models in <code>ComfyUI/models/Yolov8</code>.<br> Download the YOLOv8 segmentation detector and other collections from the following resources: <a href = "https://huggingface.co/jags/yolov8_model_segmentation-set"> YOLOv8 model collections </a><br>
Ultralytics Detection models : <a href = "https://docs.ultralytics.com/tasks/detect/"> YOLOv8m </a><br>
Ultralytics Segmentation models : <a href = "https://docs.ultralytics.com/tasks/segment/"> YOLOv8m-seg, </a> </br>
Ultralytics Pose models, also available in case you want to explore: <a href = "https://docs.ultralytics.com/tasks/pose/"> POSE MODELS (yolov8n-pose.pt, yolov8s-pose.pt, yolov8m-pose.pt, yolov8l-pose.pt, yolov8x-pose.pt, yolov8x-pose-p6.pt) </a> Please note the pose models need a separate type of node to work and cannot be used in the normal segmentation mask node. </br>
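Based on the instructions above, the expected model folder layout would look like this (the checkpoint file names are examples; place whichever segmentation checkpoints you download here):

```
ComfyUI/
└── models/
    └── Yolov8/
        ├── yolov8m-seg.pt
        └── yolov8x-seg.pt
```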
ComfyUI node for the <a href = "https://huggingface.co/docs/transformers/main/en/model_doc/clipseg" > CLIPSeg model </a> to generate masks for image inpainting tasks based on text prompts.<br>
Download the <a href = "https://huggingface.co/CIDAS/clipseg-rd64-refined/tree/main"> clipseg model </a> and place it in the <code>ComfyUI/models/clipseg</code> directory for the node to work.<br>
Ensure that directory contains all the files from the Hugging Face repo, including <code>config.json</code>; CLIPSeg will not work properly otherwise. <br>
- [ ] Add guidance to notebook
ComfyUI_Jags_VectorMagic Linked Repos
Guides:
ComfyUI Community Manual (eng) by @BlenderNeko
Extensions and Custom Nodes:
Plugins for Comfy List (eng) by @WASasquatch
Tomoaki's personal Wiki (jap) by @tjhayasaka
If you create a cool image with our nodes, please show your result and message us on Twitter at @jags111 or @NeuralismAI.
You can join the <a href="https://discord.gg/vNVqT82W" alt="Neuralism Discord"> NEURALISM AI DISCORD </a> or <a href="https://discord.gg/UmSd4qyh" alt="Jags AI Discord"> JAGS AI DISCORD </a> to share your work created with these nodes, exchange experiences and parameters, and see more interesting custom workflows.
Support us on Patreon for more future models and new versions of AI notebooks.
My buymeacoffee.com page and links are here; if you are happy with my work, just buy me a coffee!
<a href="https://www.buymeacoffee.com/jagsAI"> coffee for JAGS AI</a>
Thank you for being awesome!
<img src = "images/00_01_00009_.png" width = "50%"> <!-- end support-pitch -->