# AI4ArtsEd Prompt Nodes for ComfyUI
Experimental prompt transformation nodes for ComfyUI, developed as part of the AI4ArtsEd research initiative. These components support stylistic and cultural translation of prompts and images for use in educational, artistic, and cultural research contexts.
More information: https://kubi-meta.de/ai4artsed
## Features
- Accepts `STRING` or `IMAGE` input from any upstream node (see the interface sketch below).
- Structured control of transformation via `style_prompt`, `context`, and `instruction`.
- Supports both:
  - Local inference using Ollama,
  - Remote APIs via OpenRouter.
- Produces output as `STRING` (for use in CLIP, chaining, or save/display).
- Optional debug logging and model unloading for Ollama.
- Optional API key file handling for OpenRouter.
- Includes an image-to-text analysis node for multimodal LLMs.
- New: Flexible text distribution node for routing strings to multiple downstream paths.
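All nodes follow the standard ComfyUI custom-node interface. As a rough orientation, a minimal sketch of a text-transformation node of this kind could look as follows (class and parameter names are illustrative assumptions, not the actual implementation in this repository):

```python
# Illustrative sketch only – not the actual AI4ArtsEd implementation.
class ExamplePromptTransform:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares the sockets shown in the ComfyUI graph editor.
        return {
            "required": {
                "text": ("STRING", {"multiline": True}),
                "style_prompt": ("STRING", {"multiline": True}),
            },
            "optional": {
                "context": ("STRING", {"multiline": True}),
                "instruction": ("STRING", {"multiline": True}),
            },
        }

    RETURN_TYPES = ("STRING",)   # output can be wired into CLIP Text Encode, chaining, or display nodes
    FUNCTION = "transform"
    CATEGORY = "AI4ArtsEd"

    def transform(self, text, style_prompt, context="", instruction=""):
        # A real node would send the assembled prompt to Ollama or OpenRouter;
        # here the parts are only concatenated to show the data flow.
        combined = "\n\n".join(p for p in (instruction, style_prompt, context, text) if p)
        return (combined,)
```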
## Available Nodes
1. AI4ArtsEd Ollama Prompt
   → Uses local Ollama models (GPU/CPU) for cultural prompt rewriting.
2. AI4ArtsEd OpenRouter Prompt
   → Uses hosted LLMs via OpenRouter for stylistic transformations.
3. AI4ArtsEd OpenRouter/Ollama ImageAnalysis
   → Sends image input (as a base64-encoded JPEG) to multimodal LLMs via OpenRouter (e.g. GPT-4V) or Ollama, together with cultural-instruction prompts (see the request sketch after this list).
4. AI4ArtsEd Text Remix
   → Selectively or randomly combines 1–12 text inputs.
   Inputs:
   - `mode`: `"random"`, `"all"`, or `"1"`–`"12"`
   - `text_1` to `text_12`: Optional style or instruction inputs
   Output:
   - `text`: Combined or selected text based on mode.
5. AI4ArtsEd Random Instruction Generator
   → Generates four creative, symbolic, or surreal instruction phrases to be used as part of prompt contexts.
   Input:
   - `text` (STRING) – typically a core concept or message
   Outputs:
   - `instruction_1` to `instruction_4` (STRING)
6. AI4ArtsEd Random Artform Generator
   → Generates four culturally or stylistically distinct reinterpretation prompts, suitable for LLM context use.
   Input:
   - `text` (STRING)
   Outputs:
   - `artform_1` to `artform_4` (STRING)
7. AI4ArtsEd Random Language Selector
   → Selects 12 random languages from a curated list including both major and marginalized world languages, suitable for translation-based workflows such as `stille_post`.
   Input:
   - (none)
   Outputs:
   - `language_1` to `language_12` (STRING): Language names selected from a diverse set of model-supported languages (including Yoruba, Māori, Guarani, etc.)
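For the ImageAnalysis node (item 3), the underlying request pattern is roughly the following: the image is serialized as a base64-encoded JPEG and embedded in an OpenAI-style multimodal chat message. This is a hedged sketch assuming OpenRouter's OpenAI-compatible endpoint; the function name, model name, and prompt handling are illustrative, not the node's actual defaults.

```python
import base64
import io

import requests
from PIL import Image


def analyze_image(image: Image.Image, instruction: str, api_key: str) -> str:
    # Serialize the image as a base64-encoded JPEG.
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG")
    b64 = base64.b64encode(buffer.getvalue()).decode("utf-8")

    # Send the instruction plus image to a multimodal model via OpenRouter.
    response = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "openai/gpt-4o",  # placeholder – any multimodal model available on OpenRouter
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```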
8. Secure Access to OpenRouter API Key
   OpenRouter.ai is a service that provides access to LLM/generative-AI systems. It is needed because LLMs such as DeepSeek R1 are far too large to run on standalone PCs. OpenRouter.ai-based nodes require an API key to access hosted models. For security, the key should never be saved in workflows. To store your OpenRouter key safely:
   - Find the folder "custom_nodes/AI4ArtsEd/" on your system.
   - Open the file "openrouter.key.template" with a text editor.
   - Replace the placeholder text with your key. Do not add anything else.
   - Remove the ".template" part of the filename, so that it reads "openrouter.key".
   - Make sure you do not share or upload this file. If you use a GitHub repository, it is strongly recommended to add "openrouter.key" to the ".gitignore".
   Of course, it is possible to use service providers other than OpenRouter.ai. To do so, the .py files have to be modified accordingly.
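For illustration, a node could load the key from this file at runtime roughly as follows (a sketch under the assumption that `openrouter.key` sits next to the node's Python files; the actual loading logic in this repository may differ):

```python
from pathlib import Path


def load_openrouter_key() -> str:
    # Hypothetical helper – assumes openrouter.key lives in the node package folder.
    key_file = Path(__file__).parent / "openrouter.key"
    if not key_file.exists():
        raise FileNotFoundError(
            "openrouter.key not found – copy openrouter.key.template and insert your key"
        )
    key = key_file.read_text(encoding="utf-8").strip()
    if not key:
        raise ValueError("openrouter.key is empty")
    return key
```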
### Node: Secure Access to OpenRouter API Key
This node:
- provides the API key as a `STRING` output,
- retrieves it from the `OPENROUTER_KEY` environment variable,
- is intended to be connected to `ai4artsed_openrouter` or other LLM-related nodes.
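A minimal sketch of such a key-provider node, assuming only what is stated above (the `OPENROUTER_KEY` environment variable and a single `STRING` output); the class name is illustrative, not the repository's actual class:

```python
import os


class OpenRouterKeyProvider:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {}}  # no inputs; the key comes from the environment

    RETURN_TYPES = ("STRING",)
    FUNCTION = "provide"
    CATEGORY = "AI4ArtsEd"

    def provide(self):
        # Read the key from the environment instead of embedding it in the workflow.
        key = os.environ.get("OPENROUTER_KEY", "").strip()
        if not key:
            raise RuntimeError("OPENROUTER_KEY environment variable is not set")
        return (key,)
```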
## CREDITS
This project includes adapted code from the following third-party sources:
- ComfyUI-Show-Text by fairy-root
  https://github.com/fairy-root/ComfyUI-Show-Text
  Licensed under the MIT License.
  Adapted in `ai4artsed_show_text.py` to enable text output display in ComfyUI workflows.
- comfyui-ollama by stavsap
  https://github.com/stavsap/comfyui-ollama
  Licensed under the Apache License 2.0.
  Adapted in `ai4artsed_openrouter_imageanalysis.py` and related files to support multimodal LLM image analysis.
  See `license/THIRD_PARTY_LICENSES/comfyui-ollama_APACHE.txt` for license details.