A collection of ComfyUI custom nodes for interacting with various cloud services, such as the LLM providers Groq and OpenRouter. These nodes are designed to work with any ComfyUI instance, including cloud-hosted environments (such as MimicPC) where users may have limited system access.
Install via ComfyUI-Manager, or manually:
```
cd ComfyUI/custom_nodes
git clone https://github.com/EnragedAntelope/ComfyUI-EACloudNodes
cd ComfyUI-EACloudNodes
pip install -r requirements.txt
```
The following input parameters are available in both the OpenRouter and Groq nodes:

- `api_key`: ⚠️ Your API key (note: the key will be visible in workflows)
- `model`: Model selection (dropdown or identifier)
- `system_prompt`: Optional system context setting
- `user_prompt`: Main prompt/question for the model
- `temperature`: Controls response randomness (0.0-2.0)
- `top_p`: Nucleus sampling threshold (0.0-1.0)
- `frequency_penalty`: Token frequency penalty (-2.0 to 2.0)
- `presence_penalty`: Token presence penalty (-2.0 to 2.0)
- `response_format`: Choose between text or JSON object output
- `seed_mode`: Control reproducibility (Fixed, Random, Increment, Decrement)
- `max_retries`: Maximum retry attempts (0-5) for recoverable errors
- `image_input`: Optional image for vision-capable models
- `additional_params`: Optional JSON object for extra model parameters

Both nodes also share these outputs:

- `response`: The model's generated text or JSON response
- `status`: Detailed information about the request, including the model used and token counts
- `help`: Static help text with usage information and the repository URL

**OpenRouter Node:** Interact with OpenRouter's API to access various AI models for text and vision tasks.
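Under the hood, the node presumably assembles a standard chat-completions payload from its inputs. A hedged sketch of that mapping (`build_payload` is illustrative, not the node's actual code; defaults are assumptions):

```python
import json

def build_payload(model, user_prompt, system_prompt="", temperature=1.0,
                  top_p=1.0, max_tokens=1024, response_format="text"):
    """Map node-style inputs onto an OpenAI-style chat-completions body."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    payload = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    }
    if response_format == "json_object":
        # OpenRouter follows the OpenAI convention for JSON-mode output
        payload["response_format"] = {"type": "json_object"}
    return payload

body = build_payload("meta-llama/llama-3-8b-instruct", "Describe this scene.",
                     system_prompt="You are concise.")
print(json.dumps(body, indent=2))
```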
Inputs:

- `api_key`: ⚠️ Your OpenRouter API key (get one from https://openrouter.ai/keys)
- `model`: Select from available models or choose "Manual Input"
- `manual_model`: Enter a custom model name (only used when "Manual Input" is selected)
- `base_url`: OpenRouter API endpoint URL (default: https://openrouter.ai/api/v1/chat/completions)
- `system_prompt`: Optional system context setting
- `user_prompt`: Main prompt/question for the model
- `temperature`: Controls response randomness (0.0-2.0)
- `top_p`: Nucleus sampling threshold (0.0-1.0)
- `top_k`: Vocabulary limit (1-1000)
- `max_tokens`: Maximum number of tokens to generate
- `frequency_penalty`: Token frequency penalty (-2.0 to 2.0)
- `presence_penalty`: Token presence penalty (-2.0 to 2.0)
- `repetition_penalty`: Repetition penalty (1.0-2.0)
- `response_format`: Choose between text or JSON object output
- `seed_mode`: Control reproducibility (Fixed, Random, Increment, Decrement)
- `max_retries`: Number of retry attempts for recoverable errors (0-5)
- `image_input`: Optional image for vision-capable models
- `additional_params`: Optional JSON object for extra model parameters

Outputs:

- `response`: The model's generated text or JSON response
- `status`: Detailed information about the request, including the model used and token counts
- `help`: Static help text with usage information and the repository URL

**OpenRouter Models Node:** Query and filter available models from OpenRouter's API.
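The node's filter/sort behavior can be sketched against sample data shaped like OpenRouter's `/api/v1/models` response (field names here are assumptions based on that public API, not the node's code):

```python
# Sample entries mimicking OpenRouter's /api/v1/models response shape.
models = [
    {"id": "openai/gpt-4o", "context_length": 128000,
     "pricing": {"prompt": "0.0000025"}},
    {"id": "meta-llama/llama-3-8b-instruct", "context_length": 8192,
     "pricing": {"prompt": "0.00000005"}},
    {"id": "anthropic/claude-3-haiku", "context_length": 200000,
     "pricing": {"prompt": "0.00000025"}},
]

def filter_and_sort(models, filter_text="", sort_by="name", descending=False):
    """Keep models whose id matches filter_text, then sort by the chosen key."""
    hits = [m for m in models if filter_text.lower() in m["id"].lower()]
    key = {
        "name": lambda m: m["id"],
        "context_length": lambda m: m["context_length"],
        "pricing": lambda m: float(m["pricing"]["prompt"]),
    }[sort_by]
    return sorted(hits, key=key, reverse=descending)

cheapest_llama = filter_and_sort(models, "llama", sort_by="pricing")
```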
Inputs:

- `api_key`: ⚠️ Your OpenRouter API key (note: the key will be visible in workflows)
- `filter_text`: Text to filter models by
- `sort_by`: Sort models by name, pricing, or context length
- `sort_order`: Choose ascending or descending sort order

**Groq Node:** Interact with Groq's API for ultra-fast inference with various LLM models.
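Groq's endpoint is OpenAI-compatible, and its vision models reject a system message, which is why the node exposes a `send_system` toggle. A hedged sketch of how the message list is plausibly built (illustrative only, not the node's code):

```python
def build_messages(user_prompt, system_prompt="", send_system=True,
                   image_b64=None):
    """Build an OpenAI-style message list; send_system must be toggled off
    for Groq vision models, which reject a system message."""
    messages = []
    if system_prompt and send_system:
        messages.append({"role": "system", "content": system_prompt})
    if image_b64 is None:
        content = user_prompt
    else:
        # Vision requests use a content list mixing text and an image data URL.
        content = [
            {"type": "text", "text": user_prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ]
    messages.append({"role": "user", "content": content})
    return messages
```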
Inputs:

- `model`: Select from available Groq models or choose "Manual Input"
- `manual_model`: Enter a custom model name (only used when "Manual Input" is selected)
- `user_prompt`: Main prompt/question for the model
- `system_prompt`: Optional system context setting
- `send_system`: Toggle system prompt sending (must be off for vision models)
- `temperature`: Controls response randomness (0.0-2.0)
- `top_p`: Nucleus sampling threshold (0.0-1.0)
- `max_completion_tokens`: Maximum number of tokens to generate
- `frequency_penalty`: Token frequency penalty (-2.0 to 2.0)
- `presence_penalty`: Token presence penalty (-2.0 to 2.0)
- `response_format`: Choose between text or JSON object output
- `seed_mode`: Control reproducibility (Fixed, Random, Increment, Decrement)
- `max_retries`: Number of retry attempts for recoverable errors (0-5)
- `image_input`: Optional image for vision-capable models
- `additional_params`: Optional JSON object for extra model parameters

Outputs:

- `response`: The model's generated text or JSON response
- `status`: Detailed information about the request, including the model used and token counts
- `help`: Static help text with usage information and the repository URL
Usage tips:

- Enter your prompt in the `user_prompt` field
- Connect an image to `image_input` to use vision-capable models
- Use `system_prompt` to set context or behavior
- Use the `json_object` response format for structured outputs
- Use `additional_params` to set model-specific parameters in JSON format:
```json
{
  "min_p": 0.1,
  "top_a": 0.8
}
```
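These extra keys are presumably parsed as JSON and merged into the request body, letting you pass options the node does not expose directly. A minimal sketch of that merge (illustrative, not the nodes' actual code):

```python
import json

# Base request body built from the node's regular inputs (example values).
base_payload = {"model": "some/model", "temperature": 0.7}

# additional_params arrives as a JSON string from the node input.
additional_params = '{"min_p": 0.1, "top_a": 0.8}'

# Merge: extra keys are added alongside (and could override) base keys.
payload = {**base_payload, **json.loads(additional_params)}
```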
Both nodes provide detailed error messages for common issues.
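The retry behavior behind the `max_retries` input can be sketched like this (the status-code choices and backoff schedule are assumptions, not the nodes' exact logic):

```python
import time

# Transient statuses worth retrying: rate limits and server-side faults.
RECOVERABLE = {429, 500, 502, 503}

def call_with_retries(send, max_retries=3, delay=1.0):
    """Call send() up to max_retries extra times on recoverable errors."""
    for attempt in range(max_retries + 1):
        status, body = send()
        if status == 200:
            return body
        if status not in RECOVERABLE or attempt == max_retries:
            raise RuntimeError(f"Error {status} after {attempt + 1} attempt(s)")
        time.sleep(delay * (attempt + 1))  # linear backoff between tries

# Simulated transport: fails twice with 429 (rate limit), then succeeds.
responses = iter([(429, ""), (429, ""), (200, "ok")])
result = call_with_retries(lambda: next(responses), max_retries=3, delay=0.0)
```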
Contributions are welcome! Please feel free to submit a Pull Request.