ComfyUI Extension: ComfyUI-ExternalAPI-Helpers
ComfyUI node for Flux Kontext Pro and Max models from Replicate
ComfyUI-ExternalAPI-Helpers
A collection of powerful custom nodes for ComfyUI that connect your local workflows to closed-source AI models via their APIs. Use Google's Gemini, OpenAI's GPT-Image-1, and Black Forest Labs' FLUX models directly within ComfyUI.
Key Features
- FLUX Kontext Pro & Max: Image-to-image transformations using the FLUX models via the Replicate API.
- Gemini Chat: Google's powerful multimodal AI. Ask questions about an image, generate detailed descriptions or create prompts for other models. Supports thinking budget controls for applicable models.
- GPT Image Edit: OpenAI's `gpt-image-1` for prompt-based image editing and inpainting. Simply mask an area and describe the change you want to see.
- Seamless Integration: All nodes work with standard ComfyUI inputs (IMAGE, MASK, STRING) and outputs, allowing you to chain them into complex and creative workflows.
- Secure & Simple: Simply provide your API key in the node's input field to get started.
🚀 Installation
- Navigate to your ComfyUI installation directory.
- Go into the `custom_nodes` folder: `cd ComfyUI/custom_nodes/`
- Clone this repository: `git clone https://github.com/Aryan185/ComfyUI-ExternalAPI-Helpers.git`
- Install the required Python packages. Navigate into the newly cloned directory and use pip to install the dependencies: `cd ComfyUI-ExternalAPI-Helpers`, then `pip install -r requirements.txt`
- Restart ComfyUI. After restarting, you should find the new nodes in the "Add Node" menu.
🔑 Prerequisites: API Keys
All nodes in this collection require API keys to function.
- FLUX Nodes (Replicate): You will need a Replicate API Token.
- Gemini Chat Node: You will need a Google AI Studio API Key.
- GPT Image Edit Node: You will need an OpenAI API Key.
You can paste your key directly into the `api_key` or `replicate_api_token` field on the corresponding node.
📚 Node Guide
Flux Kontext Pro / Max
These nodes allow you to transform an input image based on a text prompt. They are ideal for applying artistic styles or making significant conceptual changes to an existing image.
- Category: `image/generation`
- Inputs:
  - `image`: The source image to transform.
  - `prompt`: A text description of the desired output (e.g., "A vibrant Van Gogh painting", "Make this a 90s cartoon").
  - `replicate_api_token`: Your API token from Replicate.
  - `aspect_ratio`: The desired output aspect ratio; `match_input_image` is highly recommended to preserve the original composition.
  - `output_format`: `jpg` or `png`.
  - `safety_tolerance`: Adjusts the content safety filter level.
- Output:
  - `image`: The generated image.
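
To see roughly what these nodes do under the hood, the sketch below shows the equivalent direct call with the official `replicate` Python client. The model slug and input names follow Replicate's public `flux-kontext-pro` listing and are assumptions about how the node maps its widgets; the node itself converts the ComfyUI IMAGE tensor for you.

```python
# Rough sketch of the Replicate prediction the FLUX Kontext nodes wrap.
# Assumes the `replicate` package is installed and REPLICATE_API_TOKEN is set
# in the environment; input names follow Replicate's public flux-kontext-pro
# listing and may differ from what the node sends internally.
import replicate

output = replicate.run(
    "black-forest-labs/flux-kontext-pro",
    input={
        "prompt": "A vibrant Van Gogh painting",
        "input_image": open("source.jpg", "rb"),  # the node handles tensor-to-file conversion
        "aspect_ratio": "match_input_image",
        "output_format": "jpg",
        "safety_tolerance": 2,
    },
)
print(output)  # URL / file handle of the generated image
```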
Gemini Chat
A versatile node for text generation and image analysis. Use it to understand an image's content or to generate creative text for other nodes.
- Category: `AI/Gemini`
- Inputs:
  - `prompt`: The text prompt or question you want to ask the model.
  - `image` (Optional): An input image for the model to analyze.
  - `api_key`: Your API key from Google AI Studio.
  - `model`: The Gemini model to use (e.g., `gemini-2.5-pro`).
  - `system_instruction` (Optional): Context or rules for how the model should behave.
  - `temperature`: Controls the creativity of the output; higher is more creative.
  - `thinking`: Enables the model's thinking/reasoning process (Gemini 2.5 Pro).
- Output:
  - `response`: The text generated by the Gemini model.
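
The node issues a request along these lines. This is a minimal sketch using the `google-generativeai` package; the node may use a different SDK and handles the thinking/reasoning option itself, so treat the exact call shape as an assumption.

```python
# Minimal sketch of a Gemini request similar to what the Gemini Chat node sends.
# Assumes the google-generativeai and Pillow packages are installed; the node's
# internal request (including thinking-budget handling) may differ.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_GOOGLE_AI_STUDIO_KEY")

model = genai.GenerativeModel(
    "gemini-2.5-pro",
    system_instruction="You write concise prompts for image models.",
)
image = Image.open("input.png")  # optional image input
response = model.generate_content(
    ["Describe this image as a prompt for an image generator.", image],
    generation_config={"temperature": 0.7},
)
print(response.text)  # the text that the node returns on its `response` output
```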
GPT Image Edit
This node uses OpenAI's API to perform powerful, prompt-based inpainting and editing.
- Category: `image/ai`
- Inputs:
  - `image`: The source image to edit.
  - `mask` (Optional): A black-and-white mask. The model will edit the white area of the mask.
  - `prompt`: A description of the edit to perform (e.g., "Add a small red boat on the water", "Remove the person on the left").
  - `api_key`: Your API key from OpenAI.
  - `...other_params`: Various quality and formatting options for the OpenAI API.
- Output:
  - `image`: The edited image.
Note: If a mask is provided, the edits will be constrained to the masked region. If no mask is provided, the model will attempt to edit the entire image based on the prompt.
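
For reference, an equivalent direct request to OpenAI's Images API with the official `openai` Python SDK looks roughly like the sketch below. The node takes care of converting the ComfyUI IMAGE/MASK tensors and exposing the extra quality options, so this is only an illustration of the underlying call, not the node's exact implementation.

```python
# Rough sketch of an OpenAI image edit request comparable to what the node performs.
# Assumes the `openai` package is installed; file names are placeholders.
import base64
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

result = client.images.edit(
    model="gpt-image-1",
    image=open("source.png", "rb"),
    mask=open("mask.png", "rb"),  # for the raw API, transparent areas mark the region to edit
    prompt="Add a small red boat on the water",
)

# gpt-image-1 returns base64-encoded image data
with open("edited.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```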
Acknowledgements
- The ComfyUI team for creating such a flexible and powerful platform.
- Google, OpenAI, and Black Forest Labs for developing these incredible models.
- Replicate for providing easy API access to a wide range of models.