ComfyUI-Lumina-Next-SFT-DiffusersWrapper is a custom node for ComfyUI that integrates the Lumina-Next-SFT model, bringing the advanced Lumina text-to-image pipeline directly into ComfyUI workflows. It offers high-quality image generation with features such as time-aware scaling, optional ODE sampling, and support for high-resolution outputs. While still under active development, the node provides a robust and functional implementation of these features.
For manual installation:

1. Ensure you have ComfyUI installed and properly set up.
2. Clone this repository into your ComfyUI `custom_nodes` directory:

   ```
   git clone https://github.com/Excidos/ComfyUI-Lumina-Diffusers.git
   ```

3. The required dependencies will be installed automatically.
NOTE: This installation includes a development branch of diffusers, which may conflict with some existing nodes.
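Putting the manual steps together, the install typically looks like the following (the `ComfyUI/custom_nodes` path assumes a standard ComfyUI layout; adjust it to your setup):

```shell
# Clone the node into ComfyUI's custom node directory (standard layout assumed)
cd ComfyUI/custom_nodes
git clone https://github.com/Excidos/ComfyUI-Lumina-Diffusers.git
# Restart ComfyUI; the node's dependencies are installed automatically on load.
```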
Inputs:

- `model_path`: Path to the Lumina model (default: `Alpha-VLLM/Lumina-Next-SFT-diffusers`)
- `prompt`: Text prompt for image generation
- `negative_prompt`: Negative text prompt
- `num_inference_steps`: Number of denoising steps (default: 30)
- `guidance_scale`: Classifier-free guidance scale (default: 4.0)
- `seed`: Random seed for generation (-1 for a random seed)
- `batch_size`: Number of images to generate in one batch (default: 1)
- `scaling_watershed`: Scaling watershed parameter (default: 0.3)
- `proportional_attn`: Enable proportional attention (default: True)
- `clean_caption`: Clean input captions (default: True)
- `max_sequence_length`: Maximum sequence length for text input (default: 256)
- `use_time_shift`: Enable the time-shift feature (default: False)
- `t_shift`: Time-shift factor (default: 4)
- `strength`: Strength for image-to-image generation (default: 1.0, range: 0.0 to 1.0)
- `latents` (optional): Input latents for image-to-image generation

Outputs:

- `LATENT`: Latent representation of the generated image(s)

If you encounter any issues, please check the console output for error messages. Common issues include dependency conflicts caused by the development branch of diffusers noted above.
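For illustration, the documented inputs can be mirrored in a small Python helper. This is a hedged sketch only: `build_generation_kwargs` and `shift_timestep` are hypothetical names, not part of the node's API, and the timestep-shift formula shown is the one commonly used by flow-matching pipelines — whether `use_time_shift`/`t_shift` apply exactly this form is an assumption.

```python
import random

# Defaults as documented in the input list above.
LUMINA_DEFAULTS = {
    "num_inference_steps": 30,
    "guidance_scale": 4.0,
    "batch_size": 1,
    "scaling_watershed": 0.3,
    "proportional_attn": True,
    "clean_caption": True,
    "max_sequence_length": 256,
    "use_time_shift": False,
    "t_shift": 4,
    "strength": 1.0,
}


def build_generation_kwargs(prompt, negative_prompt="", seed=-1, **overrides):
    """Merge user overrides onto the documented defaults and resolve the seed.

    Hypothetical helper for illustration. A seed of -1 is replaced with a
    random seed, matching the behaviour described above.
    """
    kwargs = dict(LUMINA_DEFAULTS)
    kwargs.update(overrides)
    kwargs["prompt"] = prompt
    kwargs["negative_prompt"] = negative_prompt
    kwargs["seed"] = random.randrange(2**32) if seed == -1 else seed
    return kwargs


def shift_timestep(t, shift=4.0):
    """Common flow-matching timestep shift: t' = shift*t / (1 + (shift-1)*t).

    Assumed (not confirmed) to correspond to this node's use_time_shift/t_shift
    options; shift > 1 pushes timesteps toward 1.
    """
    return shift * t / (1.0 + (shift - 1.0) * t)
```

With the default `t_shift` of 4, `shift_timestep(0.5)` evaluates to 0.8, i.e. intermediate timesteps are pushed toward 1 while the endpoints 0 and 1 are left fixed.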
For further assistance, please open an issue on the GitHub repository.
Contributions are welcome! Please feel free to submit a Pull Request.