ComfyUI Extension: LoRA Visualizer
A ComfyUI custom node that parses prompt text for LoRA tags and visualizes their metadata, including trigger words, strength values, thumbnail previews, and example images.
# LoRA Tools Suite - ComfyUI Custom Nodes

A comprehensive ComfyUI custom node package for LoRA management and intelligent prompt composition. Includes three powerful nodes for parsing, visualizing, and composing prompts with LoRA tags.
## Prerequisites

**Required for all functionality:** Install ComfyUI-Lora-Manager first. It provides the essential LoRA metadata that powers the visualization and discovery features.
## Included Nodes
### 🔍 LoRA Visualizer

Parses and visualizes LoRA tags with rich metadata display.

- ✅ **Consistent LoRA Parsing**: Backend Python parsing handles both standard `<lora:name:strength>` and custom `<wanlora:name:strength>` tags with identical logic
- ✅ **Complex Name Support**: Handles LoRA names with spaces, colons, and special characters (e.g., `<lora:Detail Enhancer v2.0: Professional Edition:0.8>`)
- ✅ **Visual Thumbnails**: Displays actual LoRA preview images loaded from metadata files
- ✅ **Metadata Integration**: Shows trigger words, model information, and base model details from ComfyUI LoRA Manager
- ✅ **Separate Visual Lists**: Standard LoRAs (blue theme) and WanLoRAs (orange theme) displayed in distinct, color-coded sections
- ✅ **Canvas-based Rendering**: Properly integrated with ComfyUI's node system using custom widget drawing
- ✅ **Hover Gallery**: Hover over thumbnails to see trigger words and example images
### 🎯 Prompt Composer

Scans your installed Wan and SD LoRAs, then intelligently discovers and composes LoRA tags using semantic matching.

- ✅ **Semantic LoRA Discovery**: Uses sentence-transformers to find relevant LoRAs based on scene descriptions
- ✅ **Natural Language Input**: Describe your scene in plain English, get optimized LoRA suggestions
- ✅ **Intelligent Weight Optimization**: Automatically determines optimal LoRA strengths from metadata analysis
- ✅ **Content-Aware Matching**: Understands all content types without censorship
- ✅ **Image & Video LoRA Support**: Separate limits and handling for image vs. video LoRAs
- ✅ **Trigger Word Integration**: Automatically includes relevant trigger words in output
- ✅ **Style Mimicry**: Learns from example prompts to match artistic styles

**NOTE:** If no LoRAs show up, try reducing the `threshold` parameter, which adjusts the semantic matching cutoff.
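To illustrate why the `threshold` matters, here is a minimal, self-contained sketch of threshold-based similarity matching. The toy 3-dimensional vectors and LoRA names are stand-ins for real sentence-transformer embeddings; the node's actual scoring (including keyword boosts) is more involved.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def discover_loras(scene_vec, lora_vecs, threshold=0.5):
    """Return LoRA names whose similarity to the scene clears the threshold."""
    scored = {name: cosine_similarity(scene_vec, vec)
              for name, vec in lora_vecs.items()}
    return sorted((n for n, s in scored.items() if s >= threshold),
                  key=lambda n: scored[n], reverse=True)

# Toy "embeddings" stand in for real sentence-transformer vectors.
lora_vecs = {
    "CyberPunkAI": [0.9, 0.1, 0.0],
    "PastoralScenes": [0.0, 0.2, 0.9],
}
print(discover_loras([1.0, 0.0, 0.1], lora_vecs, threshold=0.5))
# → ['CyberPunkAI']
```

Lowering the threshold (e.g. to 0.05) admits weaker matches such as `PastoralScenes`, which is why reducing it helps when nothing shows up.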
### ✂️ Prompt Splitter (AI-Powered)

Intelligently splits prompts for image/video workflows using a local LLM, while preserving `lora`/`wanlora` tags.

- ✅ **AI-Powered Analysis**: Uses a local Ollama LLM for intelligent prompt processing, informed by best prompting practices for SD vs. Wan
- ✅ **Dual Output Generation**: Creates separate optimized prompts for image and video generation
- ✅ **Content Preservation**: Maintains all descriptive elements while optimizing for each medium
- ✅ **Local Processing**: No external API calls, complete privacy
- ✅ **Flexible Model Support**: Works with any Ollama-compatible model
- ✅ **Structured Output**: Clean, consistent formatting for downstream nodes
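As a rough sketch of what "local processing" means here: requests go to Ollama's standard `/api/generate` endpoint on localhost. The instruction text and helper name below are illustrative, not the node's actual prompt template.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_split_request(prompt, model="nollama/mythomax-l2-13b:Q4_K_M"):
    """Build a JSON payload asking the local LLM to split a prompt in two."""
    instruction = (
        "Split the following prompt into an image (SD) prompt and a video (Wan) "
        "prompt. Keep every <lora:...> and <wanlora:...> tag intact.\n\n" + prompt
    )
    return {"model": model, "prompt": instruction, "stream": False}

payload = build_split_request("a neon city <lora:CyberPunkAI:0.8> at night")

# The actual call (requires a running Ollama service):
# req = urllib.request.Request(OLLAMA_URL, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# reply = json.loads(urllib.request.urlopen(req).read())["response"]
```

Because everything targets `localhost:11434`, no prompt text ever leaves the machine.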
### Example Integration: Composer → Visualizer → Prompt Splitter

1. **Node 1**: Generate a new prompt based on the LoRAs installed on your machine using the **LoRA Prompt Composer**, providing it an initial prompt without LoRA references
2. **Node 2**: Visualize the `lora` and `wanlora` tags that were referenced, via the **LoRA Visualizer**
3. **Node 3**: Split that prompt into a Wan prompt and an SD prompt with the **Prompt Splitter** node, then pass these prompts as positive conditioning into your SD and Wan generation workflows
## Shared Features

- ✅ **Backend-Frontend Architecture**: Python handles parsing and logic, JavaScript handles visualization
- ✅ **Comprehensive Testing**: Unit tests cover edge cases and complex name parsing
## Installation

### Option 1: ComfyUI Manager (Recommended)

1. **Install via ComfyUI Manager**: Search for "LoRA Visualizer" in ComfyUI Manager
2. **Restart ComfyUI**: All Python dependencies install automatically
3. **Install external dependencies**: See the Requirements section below

### Option 2: Manual Installation

1. Clone the repository into your ComfyUI `custom_nodes` directory:

   ```bash
   cd ComfyUI/custom_nodes
   git clone https://github.com/oliverswitzer/ComfyUI-Lora-Visualizer.git
   ```

2. Install Python dependencies (if not auto-installed):

   ```bash
   cd ComfyUI-Lora-Visualizer
   pip install -r requirements.txt
   ```

3. Restart ComfyUI to load the custom nodes
4. Install external dependencies: see the Requirements section below
### Post-Installation

The nodes will appear in ComfyUI under:

- conditioning → LoRA Visualizer
- conditioning → Prompt Composer
- conditioning → Prompt Splitter
## Requirements & Dependencies

### Node Prerequisites Matrix

| | LoRA Visualizer | Prompt Composer | Prompt Splitter |
|------|----------------|-----------------|-----------------|
| External Dependencies | None | None | Ollama (Local LLM) |
| Python Dependencies | None | ✅ sentence-transformers<br/>✅ scikit-learn | None |
| ComfyUI Dependencies | ComfyUI LoRA Manager | ComfyUI LoRA Manager | None |
| Automatic Installation | ✅ All included | ✅ All included | ✅ All included |
## External Dependencies Setup

### For the Prompt Splitter Node Only

**Ollama Installation** (required for AI-powered prompt splitting):

1. **Install Ollama**: Download from ollama.ai
2. **Install the default model**: Run `ollama pull nollama/mythomax-l2-13b:Q4_K_M`
3. **Verify installation**: Run `ollama list` to see installed models
4. **Start the Ollama service**: Ollama runs automatically on most systems

**Supported Ollama Models:**

- `nollama/mythomax-l2-13b:Q4_K_M` (default, ~7GB)
- `llama3.2:3b` (alternative, ~2GB)
- `llama3.2:1b` (lightweight, ~1GB)
- `qwen2.5:3b` (alternative, ~2GB)
- Any other Ollama-compatible model
### For the LoRA Visualizer & Prompt Composer Nodes

**ComfyUI LoRA Manager** (required for metadata):

- Install the ComfyUI LoRA Manager custom node
- Ensures LoRA metadata files are downloaded and maintained
- Required for both visualization and intelligent LoRA discovery features
### Python Dependencies (Auto-Installed)

All Python dependencies are installed automatically when you install this node package:

- **sentence-transformers**: Semantic LoRA matching (Prompt Composer)
- **scikit-learn**: Similarity calculations (Prompt Composer)
- **pytest, black, pylint**: Development tools

**Note**: The package ships with all Python dependencies pre-configured. ComfyUI installs `sentence-transformers` and `scikit-learn` automatically when you first load the Prompt Composer node; no manual Python package installation is required.
## Usage

1. Add the LoRA Visualizer node to your workflow
2. Enter a prompt containing LoRA tags in the `prompt_text` input field
3. The node automatically parses and displays information about each LoRA found
### Supported LoRA Tag Formats

- **Standard LoRAs**: `<lora:model_name:strength>`
  - Example: `<lora:landscape_v1:0.8>`
- **WanLoRAs**: `<wanlora:model_name:strength>`
  - Example: `<wanlora:Woman877.v2:1.0>`
### Example Prompt

```
A beautiful portrait <lora:realistic_skin:0.7> of <wanlora:Woman877.v2:0.8> woman standing in a garden, highly detailed
```

This will display:

- **Standard LoRAs**: realistic_skin (strength: 0.7)
- **WanLoRAs**: Woman877.v2 (strength: 0.8)
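A simplified sketch of how tags like these can be extracted. Anchoring the match on the trailing numeric strength lets names contain extra colons and spaces; the node's actual parser (in `nodes/lora_visualizer_node.py`) may differ in detail.

```python
import re

# Matches <lora:NAME:STRENGTH> and <wanlora:NAME:STRENGTH>. The lazy NAME
# group plus the numeric STRENGTH anchor lets NAME itself contain colons.
LORA_TAG = re.compile(r"<(lora|wanlora):(.+?):([0-9]*\.?[0-9]+)>")

def parse_lora_tags(prompt):
    """Return (kind, name, strength) triples for every LoRA tag in the prompt."""
    return [(kind, name.strip(), float(strength))
            for kind, name, strength in LORA_TAG.findall(prompt)]

prompt = ("A beautiful portrait <lora:realistic_skin:0.7> of "
          "<wanlora:Woman877.v2:0.8> woman standing in a garden")
print(parse_lora_tags(prompt))
# → [('lora', 'realistic_skin', 0.7), ('wanlora', 'Woman877.v2', 0.8)]
```

The same pattern handles complex names such as `<lora:Detail Enhancer v2.0: Professional Edition:0.8>`, because only the final `:0.8` segment is treated as the strength.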
### Output

The node provides two outputs:

- **lora_info** (STRING): A formatted text report with detailed information about all found LoRAs
- **processed_prompt** (STRING): The original prompt text (may be modified in future versions)
## Features in Development

- **Image Gallery**: Full implementation of hover-to-view example images
- **Interactive Controls**: Click to copy trigger words, adjust strengths
- **Filtering Options**: Filter by base model, content level, etc.
- **Export Options**: Export LoRA information in various formats
## Testing

Run the test suite to verify functionality:

```bash
./run_tests.sh
```
## File Structure

```
lora-visualizer/
├── __init__.py                  # Node registration
├── README.md                    # This file
├── nodes/
│   └── lora_visualizer_node.py  # Main node implementation
├── web/
│   └── lora_visualizer.js       # Frontend visualization
└── tests/
    └── test_lora_parsing.py     # Unit tests
```
## Contributing

1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## Publishing to the ComfyUI Registry

This node can be published to the ComfyUI Registry for easy installation by users.

### Setup for Publishing

1. **Create a publisher account**: Go to the Comfy Registry and create a publisher account
2. **Get your Publisher ID**: Find your publisher ID (after the `@` symbol) on your profile page
3. **Update pyproject.toml**: Add your Publisher ID to the `PublisherId` field in `pyproject.toml`
4. **Create an API key**: Generate an API key for your publisher in the registry
5. **Set a GitHub secret**: Add your API key as `REGISTRY_ACCESS_TOKEN` in your GitHub repository secrets (Settings → Secrets and Variables → Actions → New Repository Secret)
### Automated Release Workflow

The project uses conventional commits for automatic semantic versioning. The "Release and Publish" GitHub Action automatically determines the next version from your commit messages:

**Commit Message Format:**

- `fix: description` → patch version bump (1.0.0 → 1.0.1)
- `feat: description` → minor version bump (1.0.0 → 1.1.0)
- `BREAKING CHANGE:` in the commit body → major version bump (1.0.0 → 2.0.0)
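The bump rules above amount to a simple precedence check (breaking > feature > fix). This sketch captures that logic; the actual GitHub Action's implementation may differ:

```python
def next_version(current, commits):
    """Pick the semver bump implied by a list of conventional commit messages."""
    major, minor, patch = map(int, current.split("."))
    if any("BREAKING CHANGE:" in c for c in commits):
        return f"{major + 1}.0.0"            # breaking change wins outright
    if any(c.startswith(("feat:", "feat(")) for c in commits):
        return f"{major}.{minor + 1}.0"      # new feature: minor bump
    if any(c.startswith(("fix:", "fix(")) for c in commits):
        return f"{major}.{minor}.{patch + 1}"  # bug fix only: patch bump
    return current                           # nothing releasable

print(next_version("1.0.0", ["fix: resolve parsing issue"]))   # → 1.0.1
print(next_version("1.0.0", ["feat: add custom LoRA tags"]))   # → 1.1.0
```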
**Release Process:**

1. Make commits using the conventional format
2. Go to Actions → "Release and Publish to ComfyUI Registry"
3. Click "Run workflow"
4. Add a changelog (optional)
5. Choose dry run to preview without releasing

This workflow automatically:

- ✅ Analyzes commit messages since the last release
- ✅ Calculates the appropriate version bump
- ✅ Updates the version in `pyproject.toml`
- ✅ Creates a git tag (e.g., `v1.1.0`)
- ✅ Creates a GitHub release with a changelog
- ✅ Publishes to the ComfyUI Registry
**Example Commit Messages:**

```bash
git commit -m "fix: resolve parsing issue with special characters"
git commit -m "feat: add support for custom LoRA tags"
git commit -m "feat: new visualization mode

BREAKING CHANGE: removes old API methods"
```
### Manual Publishing

For a quick republish without version changes:

1. Go to Actions → "Release and Publish to ComfyUI Registry"
2. Click "Run workflow"
3. Select "publish_only" from the action type dropdown
4. Click "Run workflow"

Alternatively, use the ComfyUI CLI: `comfy node publish`

For more details, see the ComfyUI Registry Publishing Guide.
## 🐛 Debugging and Troubleshooting

### Enable Debug Logging

By default, the LoRA Tools Suite shows only high-level progress messages. For detailed debugging information (similarity scores, boost calculations, metadata processing), enable debug logging:

**Windows (Command Prompt):**

```cmd
set COMFYUI_LORA_DEBUG=1
```

**Windows (PowerShell):**

```powershell
$env:COMFYUI_LORA_DEBUG = "1"
```

**macOS/Linux:**

```bash
export COMFYUI_LORA_DEBUG=1
```

Then start ComfyUI. You'll see detailed debug messages like:

```
[ComfyUI-Lora-Visualizer] Finding relevant image LoRAs...
[ComfyUI-Lora-Visualizer] DEBUG: 🧹 Cleaned scene for embedding: 'cyberpunk woman in neon-lit alley'
[ComfyUI-Lora-Visualizer] DEBUG: DetailAmplifier similarity: 0.7243
[ComfyUI-Lora-Visualizer] DEBUG: DetailAmplifier content boost applied: 0.7243 → 0.8692
[ComfyUI-Lora-Visualizer] Found 3 image LoRAs: ['DetailAmplifier', 'CyberPunkAI', 'NeonStyle']
```
Debug logging helps troubleshoot:
- LoRA discovery and similarity matching
- Keyword boost calculations
- Prompt composition steps
- Metadata processing details
To disable: Remove the environment variable or set it to an empty value.
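Internally, this kind of gate boils down to checking the environment variable at log time. A minimal sketch (function names here are illustrative, not the suite's actual API):

```python
import os

PREFIX = "[ComfyUI-Lora-Visualizer]"

def debug_enabled():
    """True when COMFYUI_LORA_DEBUG is set to a non-empty value."""
    return bool(os.environ.get("COMFYUI_LORA_DEBUG"))

def log(message):
    """High-level progress message, always shown."""
    print(f"{PREFIX} {message}")

def debug(message):
    """Detailed message, emitted only when debug logging is enabled."""
    if debug_enabled():
        print(f"{PREFIX} DEBUG: {message}")

log("Finding relevant image LoRAs...")
debug("DetailAmplifier similarity: 0.7243")  # silent unless COMFYUI_LORA_DEBUG is set
```

Because an empty string is falsy in Python, setting the variable to an empty value disables debug output just like removing it.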
## Development

### Prerequisites

- PDM for dependency management
- Python 3.8+ (same as the ComfyUI requirement)

### Setup Development Environment

1. Clone the repository:

   ```bash
   git clone https://github.com/oliverswitzer/ComfyUI-Lora-Visualizer.git
   cd ComfyUI-Lora-Visualizer
   ```

2. Install development dependencies:

   ```bash
   pdm install
   ```

   This creates a virtual environment and installs pytest, black, and pylint.
### Running Tests

```bash
pdm run test
```

### Code Quality

Format code with Black:

```bash
pdm run format
```

Lint with Pylint:

```bash
pdm run lint
```

Run all checks (format + lint + test):

```bash
pdm run check
```
### Test Structure

- `tests/test_lora_parsing.py`: Main test suite
- `tests/fixtures/`: Sample metadata files for testing
- `conftest.py`: Test configuration and ComfyUI mocking

Tests cover:

- LoRA tag parsing (standard and WanLoRA formats)
- Metadata extraction and processing
- Civitai URL generation
- Edge cases and error handling
### Available PDM Scripts

| Command | Description |
|---------|-------------|
| `pdm run format` | Format code with Black |
| `pdm run lint` | Lint code with Pylint |
| `pdm run check` | Run format + lint (tests via `./run_tests.sh`) |

**Note**: For tests, use `./run_tests.sh` due to import complexities with ComfyUI's module structure.
### Adding New Tests

1. Add test methods to the `TestLoRAVisualizerNode` class
2. Use fixture files in `tests/fixtures/` for realistic data
3. Mock ComfyUI dependencies (already set up in `conftest.py`)
4. Run the tests to ensure everything passes

Example test:

```python
def test_new_feature(self):
    """Test description."""
    # Setup
    test_data = {...}

    # Execute
    result = self.node.some_method(test_data)

    # Assert
    self.assertEqual(result, expected_value)
```
## License

MIT License - see the LICENSE file for details.