ComfyUI Node: ✨ Auto-LLM-Vision

Authored by xlinx


Category

🧩 Auto-Prompt-LLM

Inputs

clip CLIP
image_to_llm_vision IMAGE
llm_vision_result_append_enabled BOOLEAN
text_prompt_postive STRING
text_prompt_negative STRING
llm_keep_your_prompt_ahead BOOLEAN
llm_recursive_use BOOLEAN
llm_apiurl STRING
llm_apikey STRING
llm_api_model_name STRING
llm_vision_max_token INT
llm_vision_tempture FLOAT
llm_vision_system_prompt STRING
llm_vision_ur_prompt STRING
llm_before_action_cmd_feedback_type
  • Pass
  • just-call
  • LLM-USER-PROMPT
  • LLM-VISION-IMG_PATH
llm_before_action_cmd STRING
llm_post_action_cmd_feedback_type
  • Pass
  • just-call
  • LLM-USER-PROMPT
  • LLM-VISION-IMG_PATH
llm_post_action_cmd STRING
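Most of the inputs above (`llm_apiurl`, `llm_apikey`, `llm_api_model_name`, `llm_vision_max_token`, `llm_vision_tempture`, the system/user prompts, and the image) parameterize a single vision-LLM call. A minimal sketch of the kind of request they could feed, assuming an OpenAI-compatible chat-completions endpoint with an inline base64 image; the function name and payload layout are illustrative assumptions, not the node's actual code:

```python
import base64
import json


def build_vision_request(api_model_name: str,
                         system_prompt: str,
                         user_prompt: str,
                         image_bytes: bytes,
                         max_tokens: int = 1024,
                         temperature: float = 0.8) -> dict:
    """Assemble a chat-completions payload carrying a text prompt
    plus one inline image (data URL)."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": api_model_name,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": [
                {"type": "text", "text": user_prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ]},
        ],
    }


payload = build_vision_request(
    api_model_name="llava",                 # hypothetical model name
    system_prompt="You are an image captioner.",
    user_prompt="Describe this image for a diffusion prompt.",
    image_bytes=b"\x89PNG...",              # placeholder bytes, not a real PNG
)
body = json.dumps(payload)  # POST body for llm_apiurl, with llm_apikey as a Bearer token
```

The node would then append the model's reply to the positive prompt when `llm_vision_result_append_enabled` is true.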

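The `llm_before_action_cmd` / `llm_post_action_cmd` pair and their `feedback_type` selectors suggest a shell hook run around the LLM call, with the command's stdout optionally fed back as the next user prompt (`LLM-USER-PROMPT`) or as an image path for the vision input (`LLM-VISION-IMG_PATH`). A hypothetical sketch of that dispatch, not the node's actual implementation:

```python
import subprocess


def run_action_cmd(cmd: str, feedback_type: str):
    """Run a hook command; return its stdout only when it should be
    fed back into the LLM (as prompt text or as an image path)."""
    if feedback_type == "Pass" or not cmd:
        return None                      # hook disabled
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if feedback_type == "just-call":
        return None                      # run for side effects only
    # "LLM-USER-PROMPT": stdout becomes the user prompt;
    # "LLM-VISION-IMG_PATH": stdout is treated as an image path.
    return result.stdout.strip()
```

For example, `run_action_cmd("echo extra keywords", "LLM-USER-PROMPT")` would return `"extra keywords"` for reuse as the prompt, while a `Pass` or `just-call` type discards the output.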
Outputs

CONDITIONING
CONDITIONING
STRING
STRING
STRING
STRING
STRING

Extension: ComfyUI-decadetw-auto-prompt-llm

NODES: Auto-LLM-Text-Vision, Auto-LLM-Text, Auto-LLM-Vision

