An advanced LLM-driven node with many custom instructions, including a node finder, an expert prompter, and a JSON converter.
An advanced Chat Node for ComfyUI that integrates large language models to build text-driven applications and automate data processes (RAG). It enhances prompt responses by optionally incorporating real-time web search, linked-content extraction, and custom agent instructions, and it supports both OpenAI’s GPT-like models and alternative models served via a local Ollama API. At its core, two essential instructions drive its context-aware functionality: the Comfy Node Finder, which retrieves relevant custom nodes from the ComfyUI-Manager custom-node JSON database based on your queries, and the Smart Assistant, which ingests your workflow JSON to deliver tailored, actionable recommendations. A range of other agents, such as the Flux Prompter, several custom instructors, and a Python debugger and scripter, further extend its capabilities.
<img width="1263" alt="Bildschirmfoto 2025-02-06 um 11 05 19" src="https://github.com/user-attachments/assets/d4622bfe-c358-4f51-8dc4-cbf3d9880a70" /> ---{additional_text}
, the node replaces it with the provided additional text. Otherwise, any additional text is appended.num_search_results
), and appends the extracted content to the prompt.config.json
file and fetches additional models from an Ollama API endpoint (http://127.0.0.1:11434/api/tags
).gpt
, o1
, or o3
, the node uses the OpenAI API (configured via an API key and base URL).custom_instructions
directory for .txt
files and makes them available as options.custom-node-list.json
) to aid in finding specific nodes.Console_log
), detailed information about the prompt augmentation process and API calls is printed to the console.ComfyUI Node Assistant
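To make the prompt handling and model discovery above concrete, here is a minimal, hypothetical sketch (not the node's actual implementation; function names and details are illustrative) of how the `{additional_text}` placeholder and the Ollama `/api/tags` lookup could work:

```python
import requests

OLLAMA_TAGS_URL = "http://127.0.0.1:11434/api/tags"  # same endpoint the node queries for extra models


def augment_prompt(prompt: str, additional_text: str) -> str:
    """Replace the {additional_text} placeholder if present, otherwise append the text."""
    if "{additional_text}" in prompt:
        return prompt.replace("{additional_text}", additional_text)
    return f"{prompt}\n{additional_text}" if additional_text else prompt


def list_ollama_models() -> list:
    """Return the names of locally available Ollama models, or [] if the server is not running."""
    try:
        response = requests.get(OLLAMA_TAGS_URL, timeout=2)
        response.raise_for_status()
        return [model["name"] for model in response.json().get("models", [])]
    except requests.RequestException:
        return []


print(augment_prompt("A cinematic photo of {additional_text}", "a foggy harbor at dawn"))
print(list_ollama_models())
```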
### ComfyUI Node Assistant

An advanced agent that analyzes your specific use case and strictly uses the provided `../ComfyUI-Manager/custom-node-list.json` reference to deliver consistent, structured, ranked recommendations featuring node names, detailed descriptions, categories, inputs/outputs, and usage notes. It dynamically refines suggestions based on your requirements, surfacing both top-performing and underrated nodes in categories such as Best Image Processing Nodes, Top Text-to-Image Nodes, Essential Utility Nodes, Best Inpainting Nodes, Advanced Control Nodes, Performance Optimization Nodes, Hidden Gems, Latent Processing Nodes, Mathematical Nodes, Noise Processing Nodes, Randomization Nodes, and Display & Show Nodes, for optimal functionality, efficiency, and compatibility.
<img width="1151" alt="image" src="https://github.com/user-attachments/assets/dbf27e20-4eff-454c-9a9a-16045e67bae3" />ComfyUI Smart Assistant
An advanced, context-aware instruction that ingests your workflow JSON, analyzes your use case in depth, and delivers tailored, high-impact recommendations as structured, ranked insights. Each recommendation includes a name, a detailed description, a categorical breakdown, input/output specifications, and usage notes, and the assistant adapts to your evolving requirements through in-depth comparisons, alternative methodologies, and layered workflow enhancements. Its capabilities extend to wildcard searches, comprehensive error-handling strategies, real-time monitoring insights, and integration guidance, organized into key sections such as "Best Workflow Enhancements," "Essential Automation Tools," "Performance Optimization Strategies," "Advanced Customization Tips," "Hidden Gems & Lesser-Known Features," "Troubleshooting & Debugging," "Integration & Compatibility Advice," "Wildcard & Exploratory Searches," "Security & Compliance Measures," and "Real-Time Feedback & Monitoring." The goal is peak functionality, efficiency, and compatibility while maximizing productivity and driving continuous improvement.
<img width="662" alt="image" src="https://github.com/user-attachments/assets/3230a6cf-a783-4914-ba8f-f580c2f971d0" />Polymath Scraper
An automated web-scraper node designed for seamless gallery extraction, allowing users to input a gallery website URL and retrieve image data efficiently. Built on gallery-dl, it supports all websites listed in the official gallery-dl repository.
Ideal for creating large, labeled datasets for AI model training, reducing manual effort and streamlining workflow efficiency.
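Under the hood, the node relies on the gallery-dl CLI. As a rough illustration (a sketch, not the node's actual code; the destination path and flags are assumptions you may need to adjust), an equivalent call looks like this:

```python
import subprocess
from pathlib import Path


def scrape_gallery(url: str, dest: str = "./dataset") -> None:
    """Download every image from a gallery URL with gallery-dl, writing metadata alongside."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["gallery-dl", "-d", dest, "--write-metadata", url],
        check=True,  # raise if gallery-dl exits with an error
    )


scrape_gallery("https://example.com/gallery/12345")  # hypothetical gallery URL
```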
The node exposes a range of configurable inputs, including the prompt (which may contain `{additional_text}` placeholders) and the model selection (populated from `config.json` and the Ollama API).

### Installation

**Clone the Repository:**
```bash
git clone https://github.com/lum3on/comfyui_LLM_Polymath.git
cd comfyui_LLM_Polymath
```
**Install Dependencies:**
The node automatically attempts to install missing Python packages (such as `googlesearch`, `requests`, and `bs4`). However, you can also install the dependencies manually with:
```bash
pip install -r requirements.txt
```
**Set the API key in your environment variables:**
Create a `.env` file in your ComfyUI root folder and set your API key in it like this:
```
OPENAI_API_KEY="your_api_key_here"
```
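If you want to confirm the key is picked up, a minimal check looks like this (assuming the `python-dotenv` package; the node itself may load the variable differently):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads the .env file from the current working directory
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")
print("OpenAI key loaded:", api_key[:8] + "...")
```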
Ollama enables you to run large language models locally with a few simple commands. Follow these instructions to install Ollama and download models.
**macOS:** Download the installer from the official website or install via Homebrew:
```bash
brew install ollama
```
**Linux:** Run the installation script directly from your terminal:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
**Windows:** Visit the Ollama Download Page and run the provided installer.
Once Ollama is installed, you can easily pull and run models. For example, to download the lightweight Gemma 2B model:
```bash
ollama pull gemma:2b
```
After downloading, you can start interacting with the model using:
```bash
ollama run gemma:2b
```
For a full list of available models (including various sizes and specialized variants), please visit the official Ollama Model Library.
After you download a model via Ollama, it will automatically be listed in the model dropdown in Comfy after you restart it. This seamless integration means you don’t need to perform any additional configuration—the model is ready for use immediately within Comfy.
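If a model does not appear in the dropdown, you can verify that Ollama is actually serving it by querying the same tags endpoint the node reads (a quick standalone check, not part of the node itself):

```python
import requests

# List the models the local Ollama server exposes, e.g. ['gemma:2b', 'llama3:latest']
tags = requests.get("http://127.0.0.1:11434/api/tags", timeout=5).json()
print([model["name"] for model in tags.get("models", [])])
```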
Download models with the `ollama pull` command, or simply use the run command and the model gets auto-downloaded: `ollama run <model-name>` starts a REPL so you can interact with it directly.

By following these steps, you can quickly set up Ollama on your machine and begin experimenting with different large language models locally.
For further details on model customization and advanced usage, refer to the official documentation at Ollama Docs.
The following features are planned for the next update.
This project is licensed under the MIT License. See the LICENSE file for details.
This node integrates several libraries and APIs to deliver an advanced multimodal, web-augmented chat experience. Special thanks to all contributors and open source projects that made this work possible.
For any questions or further assistance, please open an issue on GitHub or contact the maintainer.