ComfyUI Extension: Ollama and Llava Vision integration for ComfyUI
# ComfyUI Ollama Integration
This repository, maintained by fairy-root, provides custom nodes for ComfyUI, integrating with the Ollama API for language model interactions and offering text manipulation capabilities.
<div align="center"> <img src="imgs/nodes.png"> </div>
## Features
- Ollama Chat: Interact with Ollama's language models, including streaming and logging capabilities.
- Concatenate Text LLMs: Concatenate instructional text with prompts, offering customizable text formatting.
- Ollama Vision: Loads the LLaVA model and responds to user prompts about loaded images.
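The Concatenate Text LLMs node combines an instruction with a prompt before it is sent to the model. A minimal sketch of that idea in plain Python (the function and parameter names here are illustrative, not the node's actual API):

```python
# Hypothetical sketch of a "concatenate instruction with prompt" step,
# similar in spirit to the Concatenate Text LLMs node. The separator is
# the "customizable text formatting" knob; names are assumptions.
def concatenate_text(instruction: str, prompt: str, separator: str = "\n\n") -> str:
    """Join non-empty, stripped text parts with a configurable separator."""
    parts = [p.strip() for p in (instruction, prompt) if p and p.strip()]
    return separator.join(parts)

combined = concatenate_text(
    "Describe the image in one sentence.",
    "Focus on colors and lighting.",
)
print(combined)
```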
## Installation

### Requirements

- Python 3.x
- ComfyUI
### Steps

1. Go to the `ComfyUI/custom_nodes` directory in a terminal (cmd):

   ```
   cd ComfyUI/custom_nodes
   ```

2. Clone the repository:

   ```
   git clone https://github.com/fairy-root/comfyui-ollama-llms.git
   ```

3. Install the required Python package:

   ```
   pip install ollama
   ```

4. Restart ComfyUI.
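Once the `ollama` package is installed, chatting with a local model from Python looks roughly like the sketch below. This is a minimal illustration of the client API, not the node's actual internals, and it assumes the Ollama app is running locally:

```python
# Minimal sketch of talking to a local Ollama server with the `ollama`
# Python package. Function names here are illustrative assumptions;
# the network call requires the Ollama app to be running.
def build_messages(system: str, user: str) -> list:
    """Build the chat message list expected by ollama.chat()."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user})
    return messages

def ask(model: str, system: str, user: str) -> str:
    import ollama  # pip install ollama
    response = ollama.chat(model=model, messages=build_messages(system, user))
    return response["message"]["content"]

# Example (needs `ollama run phi3` active):
# print(ask("phi3", "You are concise.", "What is ComfyUI?"))
```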
## Getting Started

### Obtaining an Ollama Model

To use the Load Ollama LLms node, you'll need to install Ollama. Visit Ollama and install the Ollama app for your OS, then pull and run a model from the terminal:

```
ollama pull phi3
ollama run phi3
```

Phi3 is used here only as an example because it is small and fast; you can choose any other model as well.
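For the Ollama Vision node, requests additionally carry an image. A sketch of what a LLaVA vision request looks like with the `ollama` package (the node's exact internals are an assumption; the `images` field accepts image file paths):

```python
# Sketch of a LLaVA vision request via the `ollama` package. The helper
# names are illustrative assumptions; the network call requires a running
# Ollama server with the llava model pulled (`ollama pull llava`).
def build_vision_message(prompt: str, image_path: str) -> dict:
    """Build a single user message that attaches an image to the prompt."""
    return {"role": "user", "content": prompt, "images": [image_path]}

def describe_image(image_path: str, prompt: str = "Describe this image.") -> str:
    import ollama
    response = ollama.chat(
        model="llava",
        messages=[build_vision_message(prompt, image_path)],
    )
    return response["message"]["content"]

# Example (needs the Ollama app running and the llava model pulled):
# print(describe_image("imgs/nodes.png"))
```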
## Donation
Your support is appreciated:
- USDt (TRC20):
TGCVbSSJbwL5nyXqMuKY839LJ5q5ygn2uS
- BTC:
13GS1ixn2uQAmFQkte6qA5p1MQtMXre6MT
- ETH (ERC20):
0xdbc7a7dafbb333773a5866ccf7a74da15ee654cc
- LTC:
Ldb6SDxUMEdYQQfRhSA3zi4dCUtfUdsPou
## Author and Contact
- GitHub: FairyRoot
- Telegram: @FairyRoot
## License
This project is licensed under the MIT License. See the LICENSE file for details.
## Contributing
Contributions are welcome! Please open an issue or submit a pull request for any improvements or features.