This repository provides integration of GPT-4 and Claude 3 models into ComfyUI, allowing both image- and text-based interactions within the ComfyUI workflow.
Clone this repository into the custom_nodes folder of your ComfyUI installation:
git clone https://github.com/AppleBotzz/ComfyUI_LLMVISION.git
Install the required dependencies:
pip install -r requirements.txt
Import the workflow.json file into your ComfyUI workspace.
Open the ComfyUI interface and navigate to your workspace.
Locate the imported nodes in the node library under the AppleBotzz category.
Drag and drop the desired node into your workflow.
Configure the node settings, such as API keys, model selection, and prompts.
Connect the node to other nodes in your workflow as needed.
Run the workflow to execute the LLM-based tasks.
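Under the hood, the vision nodes send your image and prompt to the Claude 3 or GPT-4 API. As a rough illustration only (not this repository's actual node code; the model name, file path, and key value below are placeholders), here is a minimal sketch of such a call using the official anthropic Python SDK:

```python
import base64
import anthropic  # pip install anthropic

# Placeholder values; in ComfyUI these come from the node's widget settings.
API_KEY = "sk-ant-..."          # your claude_api_key
IMAGE_PATH = "example.png"      # image handed to the node by ComfyUI
PROMPT = "Describe this image."

client = anthropic.Anthropic(api_key=API_KEY)

# Claude 3 vision requests take the image as base64 data inside the message content.
with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-opus-20240229",   # any Claude 3 model exposed by the node
    max_tokens=1024,                  # corresponds to the max_token setting
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
            {"type": "text", "text": PROMPT},
        ],
    }],
)

print(response.content[0].text)  # the model's description of the image
```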
openai_api_key: Your OpenAI API key for accessing GPT-4 models.
claude_api_key: Your Anthropic API key for accessing Claude 3 models.
endpoint: The API endpoint URL (default: OpenAI or Anthropic endpoints).
model: Select the specific model to use for each node.
prompt: Customize the prompt for the LLM-based task.
max_token: Set the maximum number of tokens for the generated response.

Make sure to keep your API keys secure and do not share them publicly.
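For reference, here is a rough sketch (not the node's actual implementation) of how these settings map onto a GPT-4 request made with the official openai Python SDK. The environment-variable name and model choice are just examples; reading the key from the environment is one way to keep it out of shared workflow files:

```python
import os
from openai import OpenAI  # pip install openai

# openai_api_key read from the environment instead of being hard-coded;
# endpoint corresponds to base_url (the OpenAI default is shown).
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://api.openai.com/v1",
)

response = client.chat.completions.create(
    model="gpt-4o",                                                 # model
    messages=[{"role": "user", "content": "Describe a sunset."}],   # prompt
    max_tokens=512,                                                 # max_token
)

print(response.choices[0].message.content)
```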