ComfyUI nodes for video object segmentation using the SAMURAI model.
Note: It is recommended to use a Conda environment for installing and running the nodes. Make sure to use the same Conda environment for both the ComfyUI and SAMURAI installations. It is also highly recommended to use the console version of ComfyUI.
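As a minimal sketch of such a setup (the environment name and Python version below are placeholders, not requirements; defer to the SAMURAI guide for exact versions):

```bash
# Example environment; the name "samurai" and Python 3.10 are assumptions
conda create -n samurai python=3.10
conda activate samurai
# Install and launch ComfyUI from this same environment afterwards
```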
Requirements
1. Follow the SAMURAI installation guide to install the base model.

2. Clone this repository into your ComfyUI custom nodes directory:

   ```bash
   cd ComfyUI/custom_nodes
   git clone https://github.com/takemetosiberia/ComfyUI-SAMURAI--SAM2-.git samurai_nodes
   ```

3. Copy the SAMURAI installation folder into ComfyUI/custom_nodes/samurai_nodes/.

4. Download the model weights as described in the SAMURAI guide (an illustrative sketch of steps 3 and 4 follows this list).
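For illustration, steps 3 and 4 might look like the following. The source path of the SAMURAI clone is a placeholder, and download_ckpts.sh is the checkpoint script from the SAM 2 layout that SAMURAI builds on, so adjust both to your setup:

```bash
# Copy the SAMURAI clone into the node directory (~/samurai is a placeholder path)
cp -r ~/samurai ComfyUI/custom_nodes/samurai_nodes/samurai

# Fetch the SAM 2 checkpoints (script name assumed from the SAM 2 repository layout)
cd ComfyUI/custom_nodes/samurai_nodes/samurai/sam2/checkpoints
./download_ckpts.sh
```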
After installation, your directory structure should look like this:
```
ComfyUI/
└── custom_nodes/
    └── samurai_nodes/
        ├── samurai/        # SAMURAI model installation
        ├── __init__.py     # Module initialization
        ├── samurai_node.py
        └── utils.py
```
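A quick, illustrative way to verify the layout is a simple listing, which should show the files above:

```bash
ls ComfyUI/custom_nodes/samurai_nodes
# Expected to include: samurai/  __init__.py  samurai_node.py  utils.py
```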
Most dependencies are included with the SAMURAI installation. Additional required packages:

```bash
pip install hydra-core omegaconf loguru
```
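To confirm the extra packages resolve inside the same environment, an illustrative one-liner (note that hydra-core is imported as hydra):

```bash
python -c "import hydra, omegaconf, loguru; print('extra dependencies OK')"
```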
The workflow consists of three main nodes:

- A box input node, which lets you select a region of interest (box) in the first frame of a video sequence.
- A points input node, which enables point-based object selection in the first frame.
- A segmentation node, which performs video object segmentation using the selected area.
For more examples and details, see the SAMURAI documentation.
If you encounter any issues:

- Make sure the model weights are in the samurai/sam2/checkpoints directory.
- For CUDA-related issues, ensure your Conda environment has the correct PyTorch version with CUDA support (a quick check is shown below).
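An illustrative way to confirm PyTorch sees CUDA from inside the environment:

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

If this prints False, reinstall PyTorch with a CUDA build that matches your driver before troubleshooting further.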