Maintained by Eden.art, this is a very fast, well-tuned trainer for SDXL and SD15.
This trainer was developed by the Eden team; you can try a hosted version of it in our app. It is a highly optimized trainer that can be used both for full finetuning and for training LoRA modules on top of Stable Diffusion, using a single training script and loss module that works for both SDv15 and SDXL!
The outputs of this trainer are fully compatible with ComfyUI and AUTO1111; see the documentation here. A full guide on training can be found in our docs.
<p align="center"> <strong>Training images:</strong><br> <img src="assets/xander_training_images.jpg" alt="Image 1" style="width:80%;"/> </p> <p align="center"> <strong>Generated imgs with trained LoRa:</strong><br> <img src="assets/xander_generated_images.jpg" alt="Image 2" style="width:80%;"/> </p>/ComfyUI_workflows
Optionally, set an OpenAI API key in your environment:

OPENAI_API_KEY=your_key_string
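A minimal way to do this is to export the variable in your shell before launching a job (the trainer may also pick up a `.env` file, but treat that as an assumption):

```
# Export the key for the current shell session before starting a training run.
export OPENAI_API_KEY=your_key_string
```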
Everything will work without this, but results will be better if you set this up, especially for 'face' and 'object' modes.

Install all dependencies using
pip install -r requirements.txt
then you can simply run:
python main.py train_configs/training_args.json
to start a training job.
Adjust the arguments inside training_args.json to set up a custom training job.
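For example, one simple workflow (the file name my_run.json is just an illustration) is to copy the default config, edit the copy, and point the trainer at it:

```
# Copy the shipped defaults, edit the copy (training data, output folder, steps, ...),
# then launch a run from your custom config.
cp train_configs/training_args.json train_configs/my_run.json
python main.py train_configs/my_run.json
```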
You can also run this trainer through Replicate using cog (essentially a Docker image):
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
cog build
sh cog_test_train.sh
cog run /bin/bash
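Once the image is built, a training job can typically be kicked off against the container with cog predict; the input names below are purely hypothetical, and cog_test_train.sh contains a working invocation with the real ones:

```
# Sketch only: replace the hypothetical input names/values with the ones
# actually defined by this repo's cog predictor (see cog_test_train.sh).
cog predict -i name="my_lora" -i lora_training_urls="https://example.com/training_images.zip"
```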
When running this trainer in native Python, you can also perform full unet finetuning using something like the following (adjust to your needs):
python main.py train_configs/full_finetuning_example.json
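A quick way to see which arguments differ between the default config and the full-finetuning example is to diff the two JSON files that ship with the repo:

```
# Both example configs live in train_configs/ in this repository.
diff train_configs/training_args.json train_configs/full_finetuning_example.json
```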
Bugs:
Bigger improvements: