# <img src="images/depth-estimation-logo-with-smaller-z.svg" width="32" height="32" alt="Depth Estimation Icon" style="vertical-align: middle"> ComfyUI Depth Estimation Node
<div align="center">
  <img src="images/depth-estimation-logo-with-smaller-z.svg" width="150" height="150" alt="Depth Estimation Logo">
</div>

A robust custom depth estimation node for ComfyUI using Depth-Anything models to generate depth maps from images.
## Features
- Multiple model options:
  - Depth-Anything-Small
  - Depth-Anything-Base
  - Depth-Anything-Large
  - Depth-Anything-V2-Small
  - Depth-Anything-V2-Base
  - Depth-Anything-V3-Small (requires the optional dependency)
  - Depth-Anything-V3-Base (requires the optional dependency)
- Post-processing options:
  - Gaussian blur (adjustable radius)
  - Median filtering (configurable size)
  - Automatic contrast enhancement
  - Gamma correction
- Advanced options:
  - Force CPU processing for compatibility
  - Force model reload for troubleshooting
- Camera estimation (new in v1.3.4): extract camera extrinsics and intrinsics (DA3 models only)
- Raw depth output: option to output metric depth for point clouds
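The four post-processing options map onto standard Pillow operations. Here is a minimal sketch of such a chain (the order of operations, the defaults, and the gamma curve are assumptions for illustration, not the node's exact implementation):

```python
import numpy as np
from PIL import Image, ImageFilter, ImageOps

def postprocess_depth(depth_u8, blur_radius=2.0, median_size=5,
                      auto_contrast=True, gamma=1.0):
    """Illustrative post-processing chain for a normalized 8-bit depth map."""
    img = Image.fromarray(depth_u8, mode="L")
    if blur_radius > 0:
        img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    if median_size >= 3:
        # MedianFilter requires an odd kernel size (3, 5, 7, 9, 11)
        img = img.filter(ImageFilter.MedianFilter(size=median_size))
    if auto_contrast:
        img = ImageOps.autocontrast(img)
    if gamma != 1.0:
        # Classic gamma curve applied via a lookup table: out = in ** (1 / gamma)
        lut = [round(255 * (i / 255) ** (1.0 / gamma)) for i in range(256)]
        img = img.point(lut)
    return np.asarray(img)

# A synthetic gradient standing in for a depth map
depth = np.linspace(0, 255, 64 * 64).reshape(64, 64).astype(np.uint8)
out = postprocess_depth(depth, blur_radius=2.0, median_size=5, gamma=1.2)
print(out.shape, out.dtype)
```

With this curve, gamma values above 1.0 brighten midtones and values below 1.0 darken them.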
## Installation

### Method 1: Install via ComfyUI Manager (Recommended)

1. Open ComfyUI and install the ComfyUI Manager if you haven't already
2. Go to the Manager tab
3. Search for "Depth Estimation" and install the node

### Method 2: Manual Installation

1. Navigate to your ComfyUI custom nodes directory:

   ```bash
   cd ComfyUI/custom_nodes/
   ```

2. Clone the repository:

   ```bash
   git clone https://github.com/Limbicnation/ComfyUIDepthEstimation.git
   ```

3. Install the required dependencies:

   ```bash
   cd ComfyUIDepthEstimation
   pip install -r requirements.txt
   ```

4. (Optional) To enable the Depth Anything V3 models:

   ```bash
   pip install git+https://github.com/ByteDance-Seed/Depth-Anything-3.git
   ```

5. Restart ComfyUI to load the new custom node.
Note: On first use, the node will download the selected model from Hugging Face. This may take some time depending on your internet connection.
## Usage

<div align="center">
  <img src="images/depth-estimation-node-v2.png" width="600" alt="Depth Estimation Node Preview">
</div>

<div align="center">
  <img src="images/depth_map_generator_showcase.jpg" width="600" alt="Depth Map Generator Showcase">
</div>

### Example Results (Depth Anything V3)

<div align="center">
  <img src="images/saurian_input.jpg" width="45%" alt="Input Image">
  <img src="images/da3_output.png" width="45%" alt="DA3 Output">
</div>

## Node Parameters

### Required Parameters
- `image`: Input image (IMAGE type)
- `model_name`: Select from the available Depth-Anything models
- `blur_radius`: Gaussian blur radius (0.0-10.0, default: 2.0)
- `median_size`: Median filter size (3, 5, 7, 9, or 11)
- `apply_auto_contrast`: Enable automatic contrast enhancement
- `apply_gamma`: Enable gamma correction
### Optional Parameters

- `force_reload`: Force the model to reload (useful for troubleshooting)
- `force_cpu`: Use the CPU for processing instead of the GPU (slower but more compatible)
- `enable_camera_estimation`: Enable extraction of camera pose data (DA3 models only)
- `output_raw_depth`: Output raw metric depth values instead of a normalized 0-1 map (useful for 3D reconstruction)
## Outputs

The node provides five outputs for maximum flexibility:

- `depth` (IMAGE): Standard normalized depth map (0-1, grayscale)
- `confidence` (IMAGE): Confidence map visualization
- `extrinsics` (CAMERA_EXTRINSICS): [N, 3, 4] tensor of camera extrinsics (OpenCV convention)
- `intrinsics` (CAMERA_INTRINSICS): [N, 3, 3] tensor of camera intrinsics
- `camera_json` (STRING): JSON string containing all camera parameters and metadata

Note: For non-DA3 models, the camera outputs will be empty/None.
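Together, the raw depth and `intrinsics` outputs are enough to back-project pixels into a camera-space point cloud. A minimal numpy sketch of standard pinhole back-projection (the function and variable names here are illustrative, not part of the node):

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a metric depth map into camera-space 3-D points.

    depth: [H, W] metric depth (one frame of the node's raw depth output)
    K:     [3, 3] camera intrinsics (one frame of the node's intrinsics output)
    Returns [H*W, 3] points via the pinhole model: X = (u - cx) * z / fx, etc.
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy 4x4 depth map at constant 2 m, unit focal length, centered principal point
K = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
pts = depth_to_points(np.full((4, 4), 2.0), K)
print(pts.shape)  # (16, 3)
```

The pixel at the principal point maps to (0, 0, z), as expected for a camera-centered frame.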
## Video Processing
The node supports video processing via batch inputs. You can load a video using standard ComfyUI video loaders (e.g., "Load Video") or "Load Images from Folder", which pass frames as a batch. The node processes the entire batch efficiently.
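As a mental model: ComfyUI IMAGE inputs are batched tensors shaped [frames, height, width, channels] with float values in 0-1, so a video is simply its frames stacked along the first axis. A numpy stand-in for how loader nodes assemble the batch (ComfyUI actually passes torch tensors):

```python
import numpy as np

def frames_to_batch(frames):
    """Stack same-sized 8-bit frames into a [N, H, W, C] float batch in [0, 1]."""
    return np.stack([f.astype(np.float32) / 255.0 for f in frames], axis=0)

# Three fake RGB frames standing in for decoded video frames
frames = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
          for _ in range(3)]
batch = frames_to_batch(frames)
print(batch.shape)  # (3, 120, 160, 3)
```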
## Example Usage

1. Add the `Depth Estimation` node to your ComfyUI workflow
   - Tip: You can find example workflows in the `workflows/` directory.
2. Connect an image source to the node's `image` input
3. Configure the parameters:
   - Select a model (e.g., "Depth-Anything-V2-Small" is fastest)
   - Adjust `blur_radius` (0-10) for depth map smoothing
   - Choose `median_size` (3-11) for noise reduction
   - Toggle auto-contrast and gamma correction as needed
4. Connect the output to a Preview Image node or other image processing nodes
## Model Information

| Model Name | Quality | VRAM Usage | Speed |
|------------|---------|------------|-------|
| Depth-Anything-V2-Small | Good | ~1.5 GB | Fast |
| Depth-Anything-Small | Good | ~1.5 GB | Fast |
| Depth-Anything-V2-Base | Better | ~2.5 GB | Medium |
| Depth-Anything-Base | Better | ~2.5 GB | Medium |
| Depth-Anything-V3-Small | Excellent | ~2.0 GB | Fast |
| Depth-Anything-V3-Base | Superior | ~2.5 GB | Medium |
| Depth-Anything-Large | Best | ~4.0 GB | Slow |
## Troubleshooting Guide

### Common Issues and Solutions

#### Model Download Issues

- Error: "Failed to load model" or "Model not found"
- Solution:
  1. Check your internet connection
  2. Try authenticating with Hugging Face:

     ```bash
     pip install huggingface_hub
     huggingface-cli login
     ```

  3. Try a different model (e.g., switch to Depth-Anything-V2-Small)
  4. Check the ComfyUI console for detailed error messages

#### CUDA Out of Memory Errors
- Error: "CUDA out of memory" or node shows red error image
- Solution:
  - Try a smaller model (Depth-Anything-V2-Small uses the least memory)
  - Enable the `force_cpu` option (slower but uses less VRAM)
  - Reduce the size of your input image
  - Close other VRAM-intensive applications
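To gauge the effect of resizing: activation memory scales roughly with pixel count, so halving each dimension cuts it to about a quarter. A crude stride-based downscale sketch (use a proper resize node in real workflows; nearest-neighbor subsampling loses detail):

```python
import numpy as np

def quick_halve(image):
    """Drop every other row and column: [H, W, C] -> [H//2, W//2, C]."""
    return image[::2, ::2, :]

img = np.zeros((1024, 768, 3), dtype=np.uint8)  # stand-in for a large input
small = quick_halve(img)
print(small.shape)  # (512, 384, 3)
```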
#### Node Not Appearing in ComfyUI
- Solution:
  1. Check your ComfyUI console for error messages
  2. Verify that all dependencies are installed (quote the specifiers so the shell doesn't treat `>=` as a redirect):

     ```bash
     pip install "transformers>=4.20.0" "Pillow>=9.1.0" "numpy>=1.23.0" "timm>=0.6.12"
     ```

  3. Try restarting ComfyUI
  4. Check that the node files are in the correct directory

#### Node Returns Original Image or Black Image
- Solution:
  - Try enabling the `force_reload` option
  - Check the ComfyUI console for error messages
  - Try using a different model
  - Make sure your input image is valid (not corrupted or empty)
  - Try restarting ComfyUI

#### Slow Performance
- Solution:
  - Use a smaller model (Depth-Anything-V2-Small is fastest)
  - Reduce the input image size
  - If running in CPU mode, switch to the GPU if one is available
  - Close other applications that might be using GPU resources

### Where to Get Help
- Create an issue on the GitHub repository
- Check the ComfyUI console for detailed error messages
- Visit the ComfyUI Discord for community support
## License
This project is licensed under the Apache License.