    LoRA Tag Power Loader for ComfyUI

    A powerful ComfyUI custom node that combines text-based LoRA tag detection with advanced dual noise weight support, perfect for WanVideo/Hunyuan Video workflows.

    Features

    • Text-Based LoRA Loading - Embed LoRA tags directly in your prompts (like A1111/ImpactWildcardEncode)
    • Dual Noise Weighting - Different LoRA strengths for high noise vs low noise passes (perfect for WanVideo 2.2)
    • Unlimited LoRAs - Load as many LoRAs as you need from a single text input
    • Optional Inputs - Model and CLIP are optional (like Power Lora Loader)
    • Standard ComfyUI Weighting - Uses the same method as Power Lora Loader
    • Video Model Support - Built-in WanVideo/Hunyuan LoRA key standardization
    • Smart Matching - Flexible LoRA file name matching
    • Performance - LoRA caching for faster processing
    • Clean Output - Returns text with tags removed for use in subsequent nodes
    • Helpful Tooltips - All parameters have explanatory tooltips

    Installation

    1. Clone or download this repository into your ComfyUI/custom_nodes/ directory:
    cd ComfyUI/custom_nodes/
    git clone https://github.com/yourusername/comfyui-Lora-Tag-Power-Loader.git

    2. Restart ComfyUI

    3. The node will appear under loaders > LoRA Tag Power Loader

    Tag Format

    The node supports a clean, intuitive tag format with progressive complexity:

    Basic Format

    <lora:name:weight>
    

    A single weight applies to the model, CLIP, and both noise levels.

    Example:

    <lora:style_lora:0.8>
    

    Dual Noise Format (Primary Use Case)

    <lora:name:high_noise:low_noise>
    

    Different weights for the high and low noise passes. CLIP strength follows the high noise value.

    Examples:

    <lora:detail_enhancer:1.2:0.3>    # Strong at high noise, weak at low noise
    <lora:style_transfer:0.6:0.9>     # Weak at high noise, strong at low noise
    

    Advanced Format (Clip Override)

    <lora:name:high_noise:low_noise:clip>
    

    Full control over all parameters.

    Example:

    <lora:character:1.0:0.8:0.6>      # High=1.0, Low=0.8, Clip=0.6
    
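    For reference, this tag grammar can be captured with a small regular expression. The sketch below is illustrative only (the node's actual parser may differ), and the parse_lora_tags helper is a hypothetical name:

    import re

    # Matches <lora:name>, <lora:name:w>, <lora:name:high:low>, and <lora:name:high:low:clip>
    LORA_TAG_RE = re.compile(r"<lora:([^:>]+)((?::[0-9.+-]+){0,3})>")

    def parse_lora_tags(text, default_weight=1.0):
        """Return (clean_text, specs), where each spec is (name, high, low, clip)."""
        specs = []
        for name, raw_weights in LORA_TAG_RE.findall(text):
            weights = [float(w) for w in raw_weights.strip(":").split(":") if w]
            if len(weights) == 0:        # <lora:name>
                high = low = clip = default_weight
            elif len(weights) == 1:      # <lora:name:weight>
                high = low = clip = weights[0]
            elif len(weights) == 2:      # <lora:name:high:low>
                high, low = weights
                clip = high              # clip follows the high noise value
            else:                        # <lora:name:high:low:clip>
                high, low, clip = weights[:3]
            specs.append((name.strip(), high, low, clip))
        # Strip the tags and collapse leftover whitespace for the clean text output.
        clean_text = " ".join(LORA_TAG_RE.sub("", text).split())
        return clean_text, specs

    For example, parse_lora_tags("portrait <lora:detail_enhancer:1.2:0.3>") returns ("portrait", [("detail_enhancer", 1.2, 0.3, 1.2)]).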

    Node Parameters

    Required Inputs

    • text - Text containing LoRA tags (multiline supported). Tags will be parsed and removed from output.
    • noise_mode - Which noise weights to use:
      • high_noise - Use high noise weights
      • low_noise - Use low noise weights
      • auto - Average of high and low weights

    Optional Inputs

    • model (MODEL, optional) - The model to apply LoRAs to. Can be omitted if you only want to parse tags.
    • clip (CLIP, optional) - The CLIP model to apply LoRAs to. Can be omitted for model-only LoRA loading.
    • default_weight (Float, default: 1.0) - Default weight when not specified in tag. Uses standard ComfyUI weighting method (same as Power Lora Loader).
    • weight_multiplier (Float, default: 1.0) - Global multiplier for ALL LoRA weights. Set to 2.0 to double all weights, 0.5 to halve them. Applied after individual tag weights (see the sketch after this list).
    • video_model_mode (Boolean, default: False) - Enable WanVideo/Hunyuan key standardization. Turn ON when using video models.
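
    As a rough illustration of how noise_mode, default_weight, and weight_multiplier combine into the strengths that are actually applied (a minimal sketch of the behavior described above, not the node's exact code; resolve_weights is a hypothetical helper):

    def resolve_weights(high, low, clip, noise_mode="auto", weight_multiplier=1.0):
        """Pick the model/CLIP strengths for one parsed LoRA tag."""
        if noise_mode == "high_noise":
            model_strength = high
        elif noise_mode == "low_noise":
            model_strength = low
        else:  # "auto": average of the high and low noise weights
            model_strength = (high + low) / 2.0
        # The global multiplier scales every weight after the tag values are read.
        return model_strength * weight_multiplier, clip * weight_multiplier

    # <lora:lighting:1.5:0.5> in auto mode -> model 1.00, clip 1.50 (see Example 5 below)
    print(resolve_weights(1.5, 0.5, 1.5))  # (1.0, 1.5)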

    Outputs

    • MODEL - Model with LoRAs applied
    • CLIP - CLIP with LoRAs applied
    • text - Input text with LoRA tags removed
    • lora_info - Formatted list of loaded LoRAs for debugging

    Usage Examples

    Example 1: Basic LoRA Loading

    Input text:
    A beautiful portrait <lora:photorealistic:0.8> with detailed lighting
    
    Output text:
    A beautiful portrait with detailed lighting
    
    LoRA info:
    ✓ 1. photorealistic [AUTO] M:0.80 C:0.80
    

    Example 2: Dual Noise for Video Generation

    Input text:
    Epic fantasy landscape <lora:cinematic:1.2:0.4> with <lora:detail_plus:0.8:1.0>
    
    Noise mode: high_noise
    
    Output text:
    Epic fantasy landscape with
    
    LoRA info:
    ✓ 1. cinematic [HIGH] M:1.20 C:1.20
    ✓ 2. detail_plus [HIGH] M:0.80 C:0.80
    

    Example 3: WanVideo 2.2 Workflow

    For WanVideo 2.2's dual LoRA loading pattern, you can use two instances of this node:

    High Noise Pass:

    Text: Beautiful portrait <lora:style:1.2:0.3> <lora:detail:0.9:0.5>
    Noise Mode: high_noise
    Video Model Mode: ON
    

    Low Noise Pass:

    Text: Beautiful portrait <lora:style:1.2:0.3> <lora:detail:0.9:0.5>
    Noise Mode: low_noise
    Video Model Mode: ON
    

    Example 4: Global Weight Multiplier

    Input text:
    Fantasy scene <lora:style:0.8> with <lora:lighting:0.6>
    
    Weight Multiplier: 2.0  (doubles all weights)
    
    LoRA info:
    ✓ 1. style [AUTO] M:1.60 C:1.60  (0.8 × 2.0)
    ✓ 2. lighting [AUTO] M:1.20 C:1.20  (0.6 × 2.0)
    

    Use cases for weight_multiplier:

    • Quickly test different overall LoRA strengths without editing tags
    • Fine-tune all LoRAs together in final iterations
    • Create variations with one parameter change
    • Batch process with different intensity levels

    Example 5: Multiple LoRAs with Mixed Formats

    Input text:
    A warrior <lora:character_base:0.9> wearing armor <lora:medieval:1.0:0.6:0.7>
    in a forest <lora:background:0.8> at sunset <lora:lighting:1.5:0.5>
    
    LoRA info:
    ✓ 1. character_base [AUTO] M:0.90 C:0.90
    ✓ 2. medieval [AUTO] M:0.80 C:0.70 (H:1.00 L:0.60)
    ✓ 3. background [AUTO] M:0.80 C:0.80
    ✓ 4. lighting [AUTO] M:1.00 C:1.50 (H:1.50 L:0.50)
    

    Use Cases

    Image Generation

    • Use single weight format for standard workflows: <lora:style:0.8>
    • Works exactly like traditional LoRA loaders

    Video Generation (WanVideo/Hunyuan)

    • Use dual noise format: <lora:style:1.2:0.4>
    • Toggle noise_mode between high_noise and low_noise passes
    • Enable video_model_mode for proper key standardization

    Workflow Flexibility

    • Embed LoRAs directly in your prompt text
    • No need for multiple LoRA loader nodes
    • Easy to copy/paste prompts between workflows
    • Clean text output for CLIP encoding

    Technical Details

    LoRA File Matching

    The node uses flexible name matching, tried in the following order (see the sketch after the examples below):

    1. Exact filename match
    2. Stem match (filename without extension)
    3. Prefix match
    4. Case-insensitive match

    This means <lora:my_style> can match any of:

    • my_style.safetensors
    • my_style_v2.safetensors
    • MY_STYLE.safetensors
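
    A minimal sketch of that matching order, using ComfyUI's folder_paths helper to list the loras folder (find_lora_file is a hypothetical name, not the node's actual function):

    import os
    import folder_paths  # ComfyUI's model path helper

    def find_lora_file(name):
        """Resolve a tag name to a file in models/loras using the order above."""
        candidates = folder_paths.get_filename_list("loras")
        if name in candidates:                      # 1. exact filename match
            return name
        stems = {os.path.splitext(f)[0]: f for f in candidates}
        if name in stems:                           # 2. stem match (no extension)
            return stems[name]
        for stem, f in stems.items():               # 3. prefix match
            if stem.startswith(name):
                return f
        lowered = name.lower()
        for stem, f in stems.items():               # 4. case-insensitive match
            if stem.lower() == lowered or stem.lower().startswith(lowered):
                return f
        return None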

    Caching

    LoRA files are cached in memory after the first load (see the sketch after this list), for better performance when:

    • Processing multiple prompts
    • Reusing the same workflow
    • Batch processing
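
    A caching layer can be as simple as a module-level dictionary keyed by the resolved file path; the sketch below assumes ComfyUI's folder_paths and comfy.utils helpers and is not a copy of the node's code:

    import comfy.utils
    import folder_paths

    _LORA_CACHE = {}  # resolved lora path -> loaded state_dict

    def load_lora_cached(lora_name):
        """Load a LoRA state_dict once and reuse it on later prompts."""
        path = folder_paths.get_full_path("loras", lora_name)
        if path not in _LORA_CACHE:
            _LORA_CACHE[path] = comfy.utils.load_torch_file(path, safe_load=True)
        return _LORA_CACHE[path]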

    Video Model Support

    When video_model_mode is enabled, the node attempts to import WanVideoWrapper's standardize_lora_key_format function to handle various LoRA formats:

    • AIToolkit/LyCORIS format
    • Diffusers format
    • Fun LoRA format
    • FineTrainer format

    If WanVideoWrapper is not installed, it falls back to standard LoRA loading.
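
    The fallback could look roughly like this; the import path below is an assumption for illustration, since ComfyUI-WanVideoWrapper's internal module layout is not documented here:

    def get_key_standardizer():
        """Return WanVideoWrapper's standardize_lora_key_format if available, else a no-op."""
        try:
            # Hypothetical import path -- adjust to wherever your ComfyUI-WanVideoWrapper
            # install exposes standardize_lora_key_format.
            from wanvideo_wrapper_utils import standardize_lora_key_format
            return standardize_lora_key_format
        except ImportError:
            # WanVideoWrapper not installed: fall back to standard LoRA loading.
            return lambda lora_state_dict: lora_state_dict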

    Model Architecture Compatibility

    This node is fully compatible with all major ComfyUI model architectures:

    | Model Type | Compatible | Text Encoder(s) | Notes |
    |-----------|-----------|----------------|-------|
    | Flux (Black Forest Labs) | ✅ YES | CLIP-L + T5-XXL | Full support, standard key mapping |
    | SDXL (Stable Diffusion XL) | ✅ YES | CLIP-L + CLIP-G | Dual CLIP encoders supported |
    | Wan/WanVideo (Alibaba) | ✅ YES | UMT5-XXL | Enable video_model_mode for best results |
    | Qwen (Qwen2-VL) | ✅ YES | Qwen25-7B-VLI | Vision-language model support |
    | HunyuanVideo | ✅ YES | Llama + T5 | Enable video_model_mode |
    | SD 1.5/2.1 | ✅ YES | CLIP-L | Standard support |

    Why it works: The node uses ComfyUI's universal load_lora_for_models() function, which automatically detects model architecture and maps LoRA keys correctly. This is the same method used by Power Lora Loader and all official ComfyUI loaders.
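
    That call is the same one the built-in LoraLoader makes; a rough sketch of the loading path (not this node's exact code) looks like:

    import comfy.sd
    import comfy.utils
    import folder_paths

    def apply_lora(model, clip, lora_name, strength_model, strength_clip):
        """Apply one LoRA through ComfyUI's architecture-agnostic loader."""
        lora_path = folder_paths.get_full_path("loras", lora_name)
        lora_sd = comfy.utils.load_torch_file(lora_path, safe_load=True)
        # load_lora_for_models detects the model architecture and maps LoRA keys itself.
        return comfy.sd.load_lora_for_models(model, clip, lora_sd, strength_model, strength_clip)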

    Supported LoRA Formats:

    • Standard PyTorch state_dict (.safetensors, .pt)
    • Diffusers format (transformer.* keys)
    • LyCORIS/SimpleTuner format
    • OneTrainer format
    • AIToolkit format (for video models)
    • DoRA (Decomposed LoRA)

    Troubleshooting

    LoRA Not Found

    • Check that the LoRA file exists in your ComfyUI/models/loras/ folder
    • Try using the exact filename (without extension)
    • Check the console for detailed error messages

    Video Model LoRAs Not Working

    • Ensure video_model_mode is enabled
    • Install ComfyUI-WanVideoWrapper if working with WanVideo/Hunyuan models
    • Check console for key standardization messages

    Weights Not Applied Correctly

    • Verify tag format: <lora:name:weight> or <lora:name:high:low>
    • Check that weights are valid numbers
    • Look at lora_info output to see actual applied weights

    Comparison with Other Nodes

    | Feature | This Node | Power Lora Loader | ImpactWildcardEncode | Standard LoRA Loader |
    |---------|-----------|-------------------|----------------------|---------------------|
    | Text-based tags | ✅ | ❌ | ✅ | ❌ |
    | Unlimited LoRAs | ✅ | ✅ | ✅ | ❌ |
    | Dual noise weights | ✅ | ❌ | ❌ | ❌ |
    | Video model support | ✅ | ❌ | ❌ | ❌ |
    | Clean text output | ✅ | ❌ | ✅ | ❌ |
    | Visual UI | ❌ | ✅ | ❌ | ❌ |

    Credits

    Inspired by:

    • rgthree-comfy - Power Lora Loader concept
    • ComfyUI-Impact-Pack - ImpactWildcardEncode tag parsing
    • ComfyUI-WanVideoWrapper - Video model LoRA support

    License

    MIT License - Feel free to use and modify!

    Support

    If you encounter issues or have suggestions:

    1. Check the console for detailed error messages
    2. Verify your tag format matches the examples
    3. Make sure LoRA files are in the correct directory
    4. Open an issue on GitHub with your workflow and error details