ComfyUI Node: 🦙 Ollama Image Describer 🦙

Authored by alisson-anjos

36 stars

Category

Ollama

Inputs

model
  • llava:7b-v1.6-vicuna-q2_K (Q2_K, 3.2GB)
  • llava:7b-v1.6-mistral-q2_K (Q2_K, 3.3GB)
  • llava:7b-v1.6 (Q4_0, 4.7GB)
  • llava:13b-v1.6 (Q4_0, 8.0GB)
  • llava:34b-v1.6 (Q4_0, 20.0GB)
  • llava-llama3:8b (Q4_K_M, 5.5GB)
  • llava-phi3:3.8b (Q4_K_M, 2.9GB)
  • moondream:1.8b (Q4, 1.7GB)
  • moondream:1.8b-v2-q6_K (Q6, 2.1GB)
  • moondream:1.8b-v2-fp16 (F16, 3.7GB)
custom_model STRING
api_host STRING
timeout INT
temperature FLOAT
top_k INT
top_p FLOAT
repeat_penalty FLOAT
seed_number INT
max_tokens INT
keep_model_alive INT
images IMAGE
system_context STRING
prompt STRING

Outputs

STRING
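
The node forwards these inputs to an Ollama server at `api_host` and returns the model's reply as the STRING output. The sketch below is a minimal approximation of that flow against Ollama's documented `/api/generate` endpoint; the function name `describe_image` and its default values are illustrative, not taken from the extension's source.

```python
import base64
import requests

def describe_image(
    image_path: str,
    model: str = "llava:7b-v1.6",
    api_host: str = "http://localhost:11434",
    prompt: str = "Describe this image in detail.",
    system_context: str = "You are an assistant that writes image captions.",
    temperature: float = 0.2,
    top_k: int = 40,
    top_p: float = 0.9,
    repeat_penalty: float = 1.1,
    seed_number: int = 42,
    max_tokens: int = 256,
    keep_model_alive: int = 300,
    timeout: int = 120,
) -> str:
    # Ollama expects images as base64-encoded strings.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "model": model,
        "prompt": prompt,
        "system": system_context,
        "images": [image_b64],
        "stream": False,
        # keep_alive controls how long the model stays loaded after the request (seconds).
        "keep_alive": keep_model_alive,
        "options": {
            "temperature": temperature,
            "top_k": top_k,
            "top_p": top_p,
            "repeat_penalty": repeat_penalty,
            "seed": seed_number,
            # num_predict caps the number of generated tokens (the node's max_tokens).
            "num_predict": max_tokens,
        },
    }

    resp = requests.post(f"{api_host}/api/generate", json=payload, timeout=timeout)
    resp.raise_for_status()
    return resp.json()["response"]
```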

Extension: ComfyUI-Ollama-Describer

A ComfyUI extension that lets you use LLMs (large language models) served by Ollama, such as Gemma, LLaVA (multimodal), Llama 2, Llama 3, or Mistral.
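
Any model chosen in the node must already be pulled into the Ollama instance reachable at `api_host`. A quick way to check what that server exposes is Ollama's `/api/tags` endpoint; the helper below is an assumed convenience snippet, not part of the extension itself.

```python
import requests

def list_ollama_models(api_host: str = "http://localhost:11434") -> list[str]:
    """Return the names of models currently available on the Ollama server."""
    resp = requests.get(f"{api_host}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

print(list_ollama_models())
# e.g. ['llava:7b-v1.6', 'moondream:1.8b'] -- if a model is missing,
# pull it on the host machine first (ollama pull llava:7b-v1.6).
```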
