ComfyUI_Imagen
A custom node for ComfyUI that leverages the Google Cloud Vertex AI Imagen API to generate and edit images.
Installation
- Clone this repository into your `custom_nodes` folder.

  ```
  cd ComfyUI/custom_nodes
  git clone https://github.com/ru4ls/ComfyUI_Imagen.git
  ```

- Install the required dependencies:

  ```
  pip install -r ComfyUI_Imagen/requirements.txt
  ```
Project Setup
To use this node, you need a Google Cloud Project with the Vertex AI API enabled.
- Enable the Vertex AI API: Follow the instructions in the Google Cloud documentation to enable the API for your project.

- Authenticate Your Environment: This node uses Application Default Credentials (ADC) to securely authenticate with Google Cloud. Run the following `gcloud` command in your terminal to log in and set up your credentials. This is a one-time setup.

  ```
  gcloud auth application-default login
  ```

  The node authenticates directly through the installed Python libraries and does not depend on the `gcloud.cmd` executable being available in your system's PATH at runtime.

- Create a `.env` file: In the `ComfyUI_Imagen/config/` directory, create a `.env` file by copying the `config/.env.example` file. Replace the placeholder values with your Google Cloud project ID and desired location (see the sketch after this list for how these values are typically picked up).

  ```
  # ComfyUI_Imagen/config/.env
  PROJECT_ID="YOUR_PROJECT_ID_HERE"
  LOCATION="YOUR_LOCATION_HERE"
  ```
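For reference, here is a minimal sketch of how these settings can be loaded at startup, assuming the `python-dotenv` and `google-auth` packages are available; the path and variable handling below are illustrative and may differ from this repository's actual code.

```python
# Minimal sketch (assumed, not the node's actual startup code).
import os

import google.auth
from dotenv import load_dotenv

# Read PROJECT_ID / LOCATION from the config/.env file created above
# (the path here is an example; adjust it to your ComfyUI install).
load_dotenv("ComfyUI/custom_nodes/ComfyUI_Imagen/config/.env")
project_id = os.getenv("PROJECT_ID")
location = os.getenv("LOCATION")

# google.auth.default() picks up the Application Default Credentials created by
# `gcloud auth application-default login`; no gcloud executable is needed at runtime.
credentials, adc_project = google.auth.default()
print(f"PROJECT_ID={project_id}, LOCATION={location}, ADC project={adc_project}")
```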
Nodes
This package provides a single unified node, Google Imagen, which can perform both text-to-image generation and image editing depending on the inputs provided.
Inputs
- `prompt` (STRING): The text prompt for image generation or manipulation.
- `model_name` (STRING): The generation model to use. The list is populated from `data/models.json`.
- `image` (IMAGE, optional): An optional input for a base image. Connecting an image switches the node to image-editing mode (illustrated in the sketch below).
- `mask` (MASK, optional): An optional mask for inpainting or outpainting. If no mask is provided in editing mode, the node will perform a mask-free, prompt-based image edit.
- `aspect_ratio` (STRING, optional): The desired aspect ratio for text-to-image generation.
- `edit_mode` (STRING, optional): The editing mode to use when an image is provided. Options are `inpainting` and `outpainting`.
- `seed` (INT, optional): A seed for reproducibility. See the Seed and Watermarking section below for important usage details.
- `add_watermark` (BOOLEAN, optional): A toggle to control watermarking and `seed` behavior.
Outputs
- `image` (IMAGE): The generated or edited image.
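To make the mode switching concrete: ComfyUI passes IMAGE inputs as float tensors shaped [B, H, W, C] with values in 0-1, and MASK inputs as [B, H, W]. The sketch below illustrates the decision the node makes from its optional inputs; the helper names are hypothetical and not taken from this repository.

```python
# Hypothetical illustration of the input-driven mode switch.
import numpy as np
from PIL import Image


def tensor_to_pil(image_tensor):
    """Convert a ComfyUI IMAGE tensor ([B, H, W, C], floats in 0-1) to a PIL image."""
    array = (image_tensor[0].cpu().numpy() * 255.0).clip(0, 255).astype(np.uint8)
    return Image.fromarray(array)


def choose_mode(image=None, mask=None):
    """Pick the node's behavior from which optional inputs are connected."""
    if image is None:
        return "text-to-image"    # no image connected: plain generation
    if mask is None:
        return "mask-free edit"   # image only: prompt-guided edit of the whole image
    return "masked edit"          # image + mask: inpainting / outpainting
```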
Features and Limitations
Seed and Watermarking
The Google Imagen API does not support using a seed when the automatic watermark feature is enabled.
- For Text-to-Image: To use a `seed`, you must set the `add_watermark` toggle to `False`. When `add_watermark` is `True` (the default), the `seed` value is ignored (see the sketch after this list).
- For Image Editing: The API does not support disabling watermarks. However, to maintain consistent behavior, the `add_watermark` toggle still controls the `seed`. Set it to `False` to use a `seed` for your edits.
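As a concrete illustration of this rule, the sketch below passes `seed` to the underlying Vertex AI SDK call only when `add_watermark` is off. It uses `ImageGenerationModel` from `vertexai.preview.vision_models`; the model name and wrapper function are assumptions, not a copy of this node's code.

```python
# Assumed sketch: honor `seed` only when watermarking is disabled.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="YOUR_PROJECT_ID_HERE", location="YOUR_LOCATION_HERE")
model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-002")  # example model name


def generate(prompt, seed=0, add_watermark=True, aspect_ratio="1:1"):
    kwargs = dict(
        prompt=prompt,
        number_of_images=1,
        aspect_ratio=aspect_ratio,
        add_watermark=add_watermark,
    )
    if not add_watermark:
        kwargs["seed"] = seed  # the API does not accept a seed while watermarking is on
    return model.generate_images(**kwargs)
```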
Image Editing
The node supports both masked and mask-free image editing.
- With a Mask: Provide an `image` and a `mask` to perform standard inpainting or outpainting (see the sketch after this list).
- Without a Mask: Provide only an `image` and a `prompt`. The node will perform a mask-free, prompt-guided edit of the entire image.
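For orientation, the sketch below shows the general shape of a masked versus mask-free edit using the Vertex AI SDK's `edit_image` method; the model name, file paths, and prompts are placeholders, and the node's internal handling (including `edit_mode`) may differ.

```python
# Assumed sketch of masked vs. mask-free editing via the Vertex AI SDK.
from vertexai.preview.vision_models import Image, ImageGenerationModel

model = ImageGenerationModel.from_pretrained("imagegeneration@006")  # example editing-capable model

base = Image.load_from_file("base.png")
mask = Image.load_from_file("mask.png")

# Masked edit (inpainting/outpainting): the mask defines where the prompt applies.
masked = model.edit_image(prompt="replace the sky with a sunset", base_image=base, mask=mask)

# Mask-free edit: omit the mask and the prompt guides an edit of the whole image.
mask_free = model.edit_image(prompt="make it look like winter", base_image=base)
```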
Example Usage
Text to Image Generation
- Add the `Google Imagen` node to your workflow.
- Enter a `prompt`.
- Ensure no `image` input is connected.
- To use a seed, set `add_watermark` to `False`.
- Connect the output `image` to a `PreviewImage` or `SaveImage` node.

Inpainting
- Add the `Google Imagen` node.
- Connect a `LoadImage` node to the `image` input.
- Optional: Create a mask and connect it to the `mask` input (see the mask-conversion sketch after this list).
- Enter a `prompt` describing the desired changes.
- Set `edit_mode` to `inpainting`.
- Connect the output `image` to a `PreviewImage` or `SaveImage` node.
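As a hedged illustration of the mask plumbing: ComfyUI MASK tensors are floats in the 0-1 range (typically shaped [B, H, W]), while image-editing APIs generally expect the mask as an image. The helper below is hypothetical and not taken from this repository.

```python
# Hypothetical helper: convert a ComfyUI MASK tensor into grayscale PNG bytes.
import io

import numpy as np
from PIL import Image


def mask_to_png_bytes(mask_tensor):
    """mask_tensor: ComfyUI MASK, floats in [0, 1], shaped [B, H, W] or [H, W]."""
    array = mask_tensor.cpu().numpy()
    if array.ndim == 3:  # drop the batch dimension if present
        array = array[0]
    gray = (array * 255.0).clip(0, 255).astype(np.uint8)
    buffer = io.BytesIO()
    Image.fromarray(gray, mode="L").save(buffer, format="PNG")
    return buffer.getvalue()
```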

Outpainting
This workflow is identical to inpainting, but with `edit_mode` set to `outpainting` and a mask that defines the area to be extended.
Changelog
2025-09-14: Refactored Authentication and Node Logic
- New Authentication: Replaced insecure `gcloud` subprocess calls with the official `google-auth` Python library, using Application Default Credentials (ADC). This improves security and removes the dependency on the `gcloud` executable being in the system's PATH.
- Unified Node: Consolidated the workflow into a single, powerful `Google Imagen` node that intelligently switches between text-to-image and image-editing modes based on whether an image is provided.
- Watermark & Seed Control: Added an `add_watermark` toggle. Disabling this allows the `seed` parameter to be used for reproducible results.
- Enhanced Image Editing: The node now supports both masked (inpainting/outpainting) and mask-free, prompt-guided image editing.
- Centralized Configuration: The `.env` file is now loaded from the `/config` directory, and credential loading is handled centrally for a cleaner and more robust startup.
- Improved Error Handling: Implemented error handling that prevents the ComfyUI workflow from crashing on API or configuration errors, instead printing clear messages to the console.
License
This project is licensed under the MIT License - see the LICENSE file for details.