ComfyUI-EACloudNodes
A collection of ComfyUI custom nodes for interacting with various cloud services, such as the LLM providers Groq and OpenRouter. These nodes are designed to work with any ComfyUI instance, including cloud-hosted environments (such as MimicPC) where users may have limited system access.
Installation
Install via ComfyUI-Manager, or manually:
- Clone this repository into your ComfyUI custom_nodes folder:
```bash
cd ComfyUI/custom_nodes
git clone https://github.com/EnragedAntelope/ComfyUI-EACloudNodes
```
- Install required packages:
```bash
cd ComfyUI-EACloudNodes
pip install -r requirements.txt
```
- Restart ComfyUI
Current Nodes
Common Features Across LLM Nodes
The following parameters are available in both OpenRouter and Groq nodes:
Common Parameters:
- api_key: ⚠️ Your API key (Note: key will be visible in workflows)
- model: Model selection (dropdown or identifier)
- system_prompt: Optional system context setting
- user_prompt: Main prompt/question for the model
- temperature: Controls response randomness (0.0-2.0)
- top_p: Nucleus sampling threshold (0.0-1.0)
- frequency_penalty: Token frequency penalty (-2.0 to 2.0)
- presence_penalty: Token presence penalty (-2.0 to 2.0)
- response_format: Choose between text or JSON object output
- seed_mode: Control reproducibility (Fixed, Random, Increment, Decrement)
- max_retries: Maximum retry attempts (0-5) for recoverable errors
- image_input: Optional image for vision-capable models
- additional_params: Optional JSON object for extra model parameters
Common Outputs:
- response: The model's generated text or JSON response
- status: Detailed information about the request, including model used and token counts
- help: Static help text with usage information and repository URL
OpenRouter Chat
Interact with OpenRouter's API to access various AI models for text and vision tasks.
Model Selection
- Choose from a curated list of free models in the dropdown
- Select "Manual Input" to use any OpenRouter-supported model
- When using Manual Input, enter the full model identifier in the manual_model field
- Supports both text and vision models (vision-capable models indicated in name)
Available Free Models
- Google Gemini models (various versions)
- Meta Llama models (including vision-capable versions)
- Microsoft Phi models
- Mistral, Qwen, and other open models
- Full list viewable in node dropdown
Parameters:
- api_key: ⚠️ Your OpenRouter API key (get one from https://openrouter.ai/keys)
- model: Select from available models or choose "Manual Input"
- manual_model: Enter custom model name (only used when "Manual Input" is selected)
- base_url: OpenRouter API endpoint URL (default: https://openrouter.ai/api/v1/chat/completions)
- system_prompt: Optional system context setting
- user_prompt: Main prompt/question for the model
- temperature: Controls response randomness (0.0-2.0)
- top_p: Nucleus sampling threshold (0.0-1.0)
- top_k: Vocabulary limit (1-1000)
- max_tokens: Maximum number of tokens to generate
- frequency_penalty: Token frequency penalty (-2.0 to 2.0)
- presence_penalty: Token presence penalty (-2.0 to 2.0)
- repetition_penalty: Repetition penalty (1.0-2.0)
- response_format: Choose between text or JSON object output
- seed_mode: Control reproducibility (Fixed, Random, Increment, Decrement)
- max_retries: Number of retry attempts for recoverable errors (0-5)
- image_input: Optional image for vision-capable models
- additional_params: Optional JSON object for extra model parameters
Outputs:
- response: The model's generated text or JSON response
- status: Detailed information about the request, including model used and token counts
- help: Static help text with usage information and repository URL
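These parameters map onto OpenRouter's OpenAI-compatible chat completions API. For readers who want to see what a request looks like outside ComfyUI, here is a minimal sketch of an equivalent direct call; the API key, model id, and sampling values are placeholders, not the node's internal code:

```python
import requests

API_KEY = "sk-or-..."  # placeholder; get a key from https://openrouter.ai/keys
URL = "https://openrouter.ai/api/v1/chat/completions"  # the node's default base_url

payload = {
    "model": "meta-llama/llama-3.1-8b-instruct:free",  # example model id
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},  # system_prompt
        {"role": "user", "content": "Summarize nucleus sampling."},     # user_prompt
    ],
    "temperature": 0.7,        # 0.0-2.0
    "top_p": 0.9,              # nucleus sampling threshold
    "top_k": 40,               # vocabulary limit
    "max_tokens": 256,
    "frequency_penalty": 0.0,  # -2.0 to 2.0
    "presence_penalty": 0.0,   # -2.0 to 2.0
}

resp = requests.post(URL, json=payload, headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
data = resp.json()
print(data["choices"][0]["message"]["content"])  # what the node returns as `response`
print(data.get("usage"))  # token counts, which the node summarizes in `status`
```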
OpenRouter Models Node
Query and filter available models from OpenRouter's API.
Features:
- Retrieve complete list of available models
- Filter models using custom search terms (e.g., 'free', 'gpt', 'claude')
- Sort models by name, pricing, or context length
- Detailed model information including pricing and context length
- Easy-to-read formatted output
Parameters:
- api_key: ⚠️ Your OpenRouter API key (Note: key will be visible in workflows)
- filter_text: Text to filter models
- sort_by: Sort models by name, pricing, or context length
- sort_order: Choose ascending or descending sort order
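The node's filter and sort behavior can be approximated against OpenRouter's public /models endpoint. A rough sketch (field names follow OpenRouter's documented response shape; the node's exact output formatting may differ):

```python
import requests

resp = requests.get("https://openrouter.ai/api/v1/models")
resp.raise_for_status()
models = resp.json()["data"]  # model records: id, name, context_length, pricing, ...

# Filter by a search term, then sort by context length, descending
term = "free"
matches = [
    m for m in models
    if term in m.get("id", "").lower() or term in m.get("name", "").lower()
]
matches.sort(key=lambda m: m.get("context_length") or 0, reverse=True)

for m in matches[:10]:
    pricing = m.get("pricing", {})
    print(f"{m['id']}  ctx={m.get('context_length')}  prompt_cost={pricing.get('prompt')}")
```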
Groq Chat
Interact with Groq's API for ultra-fast inference with various LLM models.
Features:
- High-speed inference with Groq's optimized hardware
- Dropdown selection of current Groq models
- Manual model input option for future or custom models
- Support for vision-capable models (models with 'vision' in their name)
- Real-time token usage tracking
- Automatic retry mechanism for recoverable errors
- Toggle for sending the system prompt (must be disabled for vision models)
Usage:
- Get your API key from console.groq.com/keys
- Add the Groq node to your workflow
- Enter your API key
- Select a model from the dropdown or choose "Manual Input" to specify a custom model
- Configure your prompts and parameters
- Connect outputs to view responses, status, and help information
Parameters:
- model: Select from available Groq models or choose "Manual Input"
- manual_model: Enter custom model name (only used when "Manual Input" is selected)
- user_prompt: Main prompt/question for the model
- system_prompt: Optional system context setting
- send_system: Toggle system prompt sending (must be off for vision models)
- temperature: Controls response randomness (0.0-2.0)
- top_p: Nucleus sampling threshold (0.0-1.0)
- max_completion_tokens: Maximum number of tokens to generate
- frequency_penalty: Token frequency penalty (-2.0 to 2.0)
- presence_penalty: Token presence penalty (-2.0 to 2.0)
- response_format: Choose between text or JSON object output
- seed_mode: Control reproducibility (Fixed, Random, Increment, Decrement)
- max_retries: Number of retry attempts for recoverable errors (0-5)
- image_input: Optional image for vision-capable models
- additional_params: Optional JSON object for extra model parameters
Outputs:
- response: The model's generated text or JSON response
- status: Detailed information about the request, including model used and token counts
- help: Static help text with usage information and repository URL
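Groq's API is OpenAI-compatible, so an equivalent direct request looks much like the OpenRouter sketch above; the main differences are the endpoint and the max_completion_tokens parameter name. Key and model id are placeholders:

```python
import requests

API_KEY = "gsk_..."  # placeholder; get a key from https://console.groq.com/keys
URL = "https://api.groq.com/openai/v1/chat/completions"

payload = {
    "model": "llama-3.3-70b-versatile",  # example; pick any model from the node's dropdown
    "messages": [
        {"role": "system", "content": "Answer briefly."},  # omitted when send_system is off
        {"role": "user", "content": "What makes Groq inference fast?"},
    ],
    "temperature": 0.5,
    "top_p": 1.0,
    "max_completion_tokens": 128,
}

resp = requests.post(URL, json=payload, headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
data = resp.json()
print(data["choices"][0]["message"]["content"])  # `response` output
print(data["usage"]["total_tokens"])             # token usage, as shown in `status`
```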
Usage Guide
Basic Text Generation
- Add an LLM node (OpenRouter or Groq) to your workflow
- Set your API key
- Choose a model
- (Optional) Set system prompt for context/behavior
- Enter your prompt in the user_prompt field
- Connect the node's output to a Text node to display results
Vision Analysis
- Add an LLM node to your workflow
- Choose a vision-capable model
- Connect an image output to the image_input
- For Groq vision models, toggle send_system to off
- Add your prompt about the image
- Connect outputs to view both the response and status
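For reference, vision requests on these OpenAI-compatible APIs embed the image as a base64 data URL inside the message content. A sketch of that message shape (the node presumably builds something equivalent from its image_input; the file name is a placeholder):

```python
import base64

# Encode a local image as a data URL (placeholder file name)
with open("example.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }
]
# Send `messages` in a chat-completions payload like the earlier sketches,
# using a vision-capable model id (and, for Groq, with send_system off).
```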
Advanced Usage
- Use system_prompt to set context or behavior
- Adjust temperature and other parameters to control response style
- Select json_object format for structured outputs
- Monitor token usage via the status output
- Chain multiple nodes for complex workflows
- Use seed_mode for reproducible outputs (Fixed) or controlled variation (Increment/Decrement)
- Use additional_params to set model-specific parameters in JSON format, for example (see the sketch after this list):

```json
{
  "min_p": 0.1,
  "top_a": 0.8
}
```
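To make the last two items concrete, here is a hedged sketch of how json_object output and extra parameters would appear in a raw request body (values are illustrative; the node's exact merge behavior isn't shown here):

```python
import json

payload = {
    "model": "example/model-id",  # placeholder
    "messages": [{"role": "user", "content": "Return a JSON object with keys 'tags' and 'caption'."}],
    # response_format json_object asks the model for structured output instead of free text
    "response_format": {"type": "json_object"},
}

# additional_params merged into the body (min_p / top_a are OpenRouter sampler knobs)
payload.update({"min_p": 0.1, "top_a": 0.8})

# A JSON-mode response can then be parsed directly:
raw_response = '{"tags": ["cat"], "caption": "A cat on a mat."}'  # stand-in for model output
print(json.loads(raw_response)["caption"])
```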
Parameter Optimization Tips
- Lower temperature (0.1-0.3) for more focused responses
- Higher temperature (0.7-1.0) for more creative outputs
- Lower top_p (0.1-0.3) for more predictable word choices
- Higher top_p (0.7-1.0) for more diverse vocabulary
- Use presence_penalty to reduce repetition
- Monitor token usage to optimize costs
- Save API key in the node for reuse in workflows
- Use seed_mode for reproducible outputs (Fixed) or controlled variation (Increment/Decrement)
Error Handling
Both chat nodes provide detailed error messages for common issues:
- Missing or invalid API keys
- Model compatibility issues
- Image size and format requirements
- JSON format validation
- Token limits and usage
- API rate limits and retries
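The max_retries parameter covers the recoverable cases (rate limits, transient server errors). A minimal sketch of that retry pattern, assuming exponential backoff; the nodes' actual logic may differ:

```python
import time
import requests

def post_with_retries(url, payload, headers, max_retries=3):
    """POST with retries on recoverable errors (429 rate limit, 5xx server errors)."""
    for attempt in range(max_retries + 1):
        resp = requests.post(url, json=payload, headers=headers)
        if resp.ok:
            return resp.json()
        if resp.status_code in (429, 500, 502, 503) and attempt < max_retries:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
            continue
        resp.raise_for_status()  # non-recoverable: e.g. 401 bad key, 404 unknown model
```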
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.