ComfyUI Node: 🦙 Ollama Text Describer 🦙

Authored by alisson-anjos

Category

Ollama

Inputs

model
  • qwen2:0.5b (Q4_0, 352MB)
  • qwen2:1.5b (Q4_0, 935MB)
  • qwen2:7b (Q4_0, 4.4GB)
  • gemma:2b (Q4_0, 1.7GB)
  • gemma:7b (Q4_0, 5.0GB)
  • gemma2:9b (Q4_0, 5.4GB)
  • llama2:7b (Q4_0, 3.8GB)
  • llama2:13b (Q4_0, 7.4GB)
  • llama3:8b (Q4_0, 4.7GB)
  • llama3:8b-text-q6_K (Q6_K, 6.6GB)
  • mistral:7b (Q4_0, 4.1GB)
custom_model STRING
api_host STRING
timeout INT
temperature FLOAT
top_k INT
top_p FLOAT
repeat_penalty FLOAT
seed_number INT
max_tokens INT
keep_model_alive INT
system_context STRING
prompt STRING

Outputs

STRING
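
The inputs above correspond closely to the fields of Ollama's text-generation API. The following is a minimal sketch, assuming the node posts to Ollama's /api/generate endpoint; the node's actual implementation may differ, and the default values shown are illustrative, not the node's real defaults.

```python
# Sketch only: how the node's inputs plausibly map onto Ollama's /api/generate
# request. This is NOT the extension's actual code.
import requests

def describe_text(prompt: str,
                  system_context: str = "",
                  model: str = "qwen2:0.5b",
                  api_host: str = "http://localhost:11434",
                  timeout: int = 300,
                  temperature: float = 0.8,
                  top_k: int = 40,
                  top_p: float = 0.9,
                  repeat_penalty: float = 1.1,
                  seed_number: int = 0,
                  max_tokens: int = 256,
                  keep_model_alive: int = 5) -> str:
    payload = {
        "model": model,                      # one of the listed tags or custom_model
        "prompt": prompt,
        "system": system_context,
        "stream": False,
        "keep_alive": f"{keep_model_alive}m",  # how long Ollama keeps the model loaded
        "options": {
            "temperature": temperature,
            "top_k": top_k,
            "top_p": top_p,
            "repeat_penalty": repeat_penalty,
            "seed": seed_number,
            "num_predict": max_tokens,       # Ollama's name for the max-token limit
        },
    }
    resp = requests.post(f"{api_host}/api/generate", json=payload, timeout=timeout)
    resp.raise_for_status()
    return resp.json()["response"]           # becomes the node's STRING output
```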

Extension: ComfyUI-Ollama-Describer

A ComfyUI extension that lets you use LLMs served by Ollama, such as Gemma, LLaVA (multimodal), Llama 2, Llama 3, or Mistral.
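
The selected model must already be available on the Ollama server before the node can use it. A minimal sketch, assuming the official ollama Python client and a locally running server (running `ollama pull qwen2:0.5b` on the command line achieves the same thing):

```python
# Hypothetical helper, not part of the extension: download one of the listed
# models so the node can find it. Requires the `ollama` Python client and a
# running Ollama server.
import ollama

ollama.pull("qwen2:0.5b")  # already-downloaded layers are skipped
```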
