ComfyUI Node: 🦙 Ollama Text Describer 🦙

Authored by alisson-anjos


Category

Ollama

Inputs

model
  • deepscaler:1.5b (F16, 3.6GB)
  • deepseek-r1:32b (Q4_K_M, 20.0GB)
  • deepseek-r1:14b (Q4_K_M, 9.0GB)
  • deepseek-r1:8b (Q4_K_M, 4.9GB)
  • deepseek-r1:7b (Q4_K_M, 4.7GB)
  • deepseek-r1:1.5b (Q4_K_M, 1.1GB)
  • qwen2:0.5b (Q4_0, 352MB)
  • qwen2.5:0.5b-instruct (Q4_K_M, 398MB)
  • qwen2:1.5b (Q4_0, 935MB)
  • qwen2.5:1.5b-instruct (Q4_K_M, 986MB)
  • qwen2.5:7b (Q4_K_M, 4.7GB)
  • qwen2.5:7b-instruct (Q4_K_M, 4.7GB)
  • qwen2:7b (Q4_0, 4.4GB)
  • gemma:2b (Q4_0, 1.7GB)
  • gemma:7b (Q4_0, 5.0GB)
  • gemma2:9b (Q4_0, 5.4GB)
  • phi3:mini (3.82b, Q4_0, 2.2GB)
  • phi3:medium (14b, Q4_0, 7.9GB)
  • phi4:14b (Q4_K_M, 9.1GB)
  • llama2:7b (Q4_0, 3.8GB)
  • llama2:13b (Q4_0, 7.4GB)
  • llama3:8b (Q4_0, 4.7GB)
  • llama3:8b-text-q6_K (Q6_K, 6.6GB)
  • llama3.1:8b (Q4_0, 4.7GB)
  • llama3.1:8b-instruct-q4_0 (Q4_0, 4.7GB)
  • llama3.1:8b-instruct-q8_0 (Q8_0, 8.5GB)
  • llama3.2:1b (Q8_0, 1.3GB)
  • llama3.2:1b-instruct-fp16 (F16, 2.5GB)
  • llama3.2:1b-instruct-q8_0 (Q8_0, 1.3GB)
  • llama3.2:3b (Q4_K_M, 2.0GB)
  • llama3.2:3b-instruct-q4_0 (Q4_0, 1.9GB)
  • llama3.2:3b-instruct-q8_0 (Q8_0, 3.4GB)
  • mistral:7b (Q4_0, 4.1GB)
custom_model (STRING)
api_host (STRING)
timeout (INT)
temperature (FLOAT)
top_k (INT)
top_p (FLOAT)
repeat_penalty (FLOAT)
seed_number (INT)
num_ctx (INT)
max_tokens (INT)
keep_model_alive (INT)
system_context (STRING)
prompt (STRING)
structured_output_format (STRING)
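
These inputs correspond closely to the options of Ollama's REST API. Below is a minimal sketch of how they might be assembled into a single `/api/generate` request. The endpoint, option names, and response shape come from Ollama's public API; the `describe` helper, the default values, and the exact node-to-field mapping are assumptions for illustration, not the extension's actual implementation.

```python
import requests

def describe(prompt, model="llama3.2:3b", api_host="http://localhost:11434",
             timeout=300, system_context="", structured_output_format=None,
             temperature=0.8, top_k=40, top_p=0.9, repeat_penalty=1.1,
             seed_number=0, num_ctx=2048, max_tokens=-1, keep_model_alive=300):
    """Hypothetical helper: assemble the node's inputs into one Ollama request."""
    payload = {
        "model": model,
        "prompt": prompt,
        "system": system_context,
        "stream": False,
        "keep_alive": keep_model_alive,  # how long the model stays loaded (seconds)
        "options": {
            "temperature": temperature,
            "top_k": top_k,
            "top_p": top_p,
            "repeat_penalty": repeat_penalty,
            "seed": seed_number,
            "num_ctx": num_ctx,
            "num_predict": max_tokens,   # Ollama's name for a max-token cap (-1 = unlimited)
        },
    }
    if structured_output_format:
        # "json" forces valid JSON; a JSON schema constrains the shape further
        payload["format"] = structured_output_format
    resp = requests.post(f"{api_host}/api/generate", json=payload, timeout=timeout)
    resp.raise_for_status()
    return resp.json()["response"]

print(describe("Describe a medieval castle at dawn in one paragraph."))
```

The node's STRING output is, in effect, the `response` field of this call. For structured_output_format, Ollama accepts either the literal string "json" or a full JSON schema; the node presumably forwards the string as given.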

Outputs

STRING

Extension: ComfyUI-Ollama-Describer

A ComfyUI extension that lets you use large language models (LLMs) served by Ollama, such as Gemma, LLaVA (multimodal), Llama 2, Llama 3, or Mistral.
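
Any model in the dropdown above (or one named via custom_model) must already be pulled into the local Ollama install. Here is a small sketch, again against Ollama's public REST API (`/api/tags`, `/api/pull`), of checking for a model and pulling it when missing; the `ensure_model` helper and the default host are illustrative assumptions.

```python
import requests

OLLAMA = "http://localhost:11434"  # default Ollama host; matches api_host above

def ensure_model(model):
    """Pull `model` into the local Ollama install if it is not present yet."""
    tags = requests.get(f"{OLLAMA}/api/tags", timeout=30).json()
    local = {m["name"] for m in tags.get("models", [])}
    if model not in local:
        # stream=False makes /api/pull block and return one final status
        r = requests.post(f"{OLLAMA}/api/pull",
                          json={"model": model, "stream": False},
                          timeout=None)
        r.raise_for_status()

ensure_model("llama3.2:3b")
```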
