FluffyRock: e75-vpred-e48: fluffyrock_e75VpredE48.yaml




[ After a week of trying to create and upload a new checkpoint, Civitai finally randomly worked long enough to complete the upload. Yay. There seems to be something bugged with this model post, as there have been constant issues for 6 months when adding newer checkpoints. Civitai says they are looking into it. ]

[ e233-terminal-snr-vpred-e206 is the last of the original vpred training line. I've put it here for "completeness". There are a handful of newer checkpoints of this vpred model with some training differences (I forget exactly what was changed). I will try to upload e257-terminal-snr-vpred-e11 later if Civitai will cooperate. ]

[ There are some newer FR models, specifically the "minsnr" lines, but they are somewhat "deepfried" and I wouldn't recommend them for general use; Lodestone has suggested using them for merging instead. As always, all of these are in the HF repo if you want to try them. ]

Official-enough Civitai upload of some of the common/popular/newer FluffyRock models. This is mostly being done to allow other posts and models to be able to properly reference the original models.

FluffyRock is a furry-focused model with a very wide understanding of concepts and styles, and the ability to sample at up to 1088x1088. There are multiple model branches being trained in parallel due to the many different experiments being conducted; each branch will produce outputs that are at least a little different from the others.

There are multiple different model branches using different methods.

A chart of the branches and what is different between them will be added once it has been updated.

The info here is incomplete. This will be improved eventually.

Currently recommended version:

Personally, I think the vpred model line is getting really good. It does require additional setup to work; see below.

Any recent epoch of terminal-snr is quite mature by this point, and I'm not seeing a whole lot of change between checkpoints beyond gradually improved understanding of lower-volume tags.

This is often a subjective preference, use whichever you like best. Or mix with other models. Do whatever you want. :V


Use e621 tags, without underscores, comma separated, any order.

Artist tags use "by name" format without "(artist)" on the tags that normally have them.
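For example, a full prompt might look something like this (the artist tag here is a placeholder, not a real e621 tag):

```
by somefurryartist, solo, anthro, fox, male, smiling, forest, detailed background, sunlight
```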

Pre-3M models do not understand meta tags. Post-3M models may understand them; I have not explicitly tested yet.

Base SD1.5 natural language understanding has been mostly lobotomized. Several projects are currently running to recreate natural language understanding similar to base SD but more specific to furry art. Those checkpoints are too undercooked so far for general use, but you can find them in the Discord thread and on HF for testing.

Most examples shown here have minimal to no neg prompting.

Note that additional setup is required to use any of the FluffyRock vpred models:

Use the provided config file.

You will need to use cfg rescale.

For A1111 (and possibly the Vlad fork), use CFG_Rescale_webui extension. Or pull the cfg rescale PR from A1111 (unless it has since been merged upstream). Hopefully in the future this becomes a stock feature in A1111.

There is a method to do this in ComfyUI, but I need to verify it before adding that info here.
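If you're curious what cfg rescale actually does, here's a minimal numpy sketch (illustrative only, not the extension's actual code; real implementations typically use per-channel statistics rather than the whole-tensor std used here):

```python
import numpy as np

def rescale_cfg(pos, neg, guidance_scale=7.5, rescale=0.7):
    """Classifier-free guidance with rescaling.
    pos/neg are the model's conditional/unconditional predictions."""
    # Standard classifier-free guidance
    cfg = neg + guidance_scale * (pos - neg)
    # Rescale the guided output back toward the conditional prediction's
    # standard deviation, countering the over-exposure plain CFG causes
    # on zero-terminal-SNR vpred models
    rescaled = cfg * (pos.std() / cfg.std())
    # Blend between the rescaled and plain CFG results by the rescale factor
    return rescale * rescaled + (1.0 - rescale) * cfg
```

A rescale factor of 0 is plain CFG; 1 is fully rescaled. The 0.7-0.9 range mentioned in the troubleshooting section below is the blend factor here.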

About the Civitai upload:

More versions will be added over time. Leave a comment if you need a particular checkpoint uploaded here. Newer models will be uploaded as I have the time to upload them and make sample images. The original Hugging Face repo will always be the most current.

I'm the one uploading these models here because, out of our small casual team, I had the most bandwidth to spare and time to maintain the page. Lodestone Rock trained the models. Many others have also helped with various things.

Due to limitations on Civitai (version string length is rather short) and how the site works (downloads do not use the original uploaded filename), the filenames for the checkpoints are different from the originals on HuggingFace. I've attempted to keep them unique between the different training branches while still close enough to the original to be able to identify them. The full original filenames for each checkpoint are listed under the "About this model" in the side panel.

Quick breakdown on each model line here.

1088-megares: Trained on a high-res dataset at up to 1088px.

Considered finished at e27 as it had plateaued and efforts moved to other lines.

1088-megares-offset-noise: Same as above, but with additional epochs trained with offset noise. This helps increase the dynamic range between dark and light parts of images, i.e. darker darks become possible.

Considered finished at e27 as it had plateaued and efforts moved to other lines.

1088-megares-offset-noise-3M: Same as above, but with a larger dataset of over 3 million images. Able to understand more concepts.

I believe no additional checkpoints are being trained in favor of giving more time to other lines.

1088-megares-terminal-snr: Similar goal to offset noise, but a technically different method: rescales the noise schedule to enforce zero terminal SNR. This blends into the additional changes done in the vpred fork below.

1088-megares-terminal-snr-vpred: Forked from 1088-megares-terminal-snr at epoch 20-21.

This is an experimental model that uses v-prediction to fix Stable Diffusion 1.5's poor noise scheduling and sampling steps. It does this in four different ways:

  • By rescaling the noise schedule to enforce zero terminal SNR.

  • Training the model with v-prediction

  • Changing the sampler to always start from the last timestep

  • Rescaling classifier-free guidance to prevent over-exposure (config rescale)

These modifications are based upon the paper "Common Diffusion Noise Schedules and Sample Steps are Flawed".
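The first of those fixes, the zero-terminal-SNR rescale, is simple enough to sketch in a few lines of numpy (an illustrative sketch following that paper, not the actual training code):

```python
import numpy as np

def rescale_zero_terminal_snr(betas):
    """Rescale a noise schedule so the final timestep has zero SNR,
    following 'Common Diffusion Noise Schedules and Sample Steps are Flawed'."""
    alphas = 1.0 - betas
    alphas_bar_sqrt = np.sqrt(np.cumprod(alphas))

    # Shift so sqrt(alpha_bar) hits exactly 0 at the last timestep
    # (zero terminal SNR), then scale so the first timestep is unchanged.
    first = alphas_bar_sqrt[0]
    last = alphas_bar_sqrt[-1]
    alphas_bar_sqrt = (alphas_bar_sqrt - last) * first / (first - last)

    # Convert back to a beta schedule
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = np.concatenate([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```

With zero SNR at the last timestep, the model genuinely starts from pure noise, which is why v-prediction (which stays well-defined at zero SNR, unlike epsilon-prediction) is needed alongside it.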

Experimentation with the model has shown a variety of possible improvements, including but not limited to:

  • Improved understanding of prompts

  • More accurate colours

  • Significantly enhanced contrast

Note that additional setup is required to use any of the FluffyRock vpred models:

Requires config file and cfg rescale. For A1111 (and possibly the Vlad fork), use CFG_Rescale_webui extension or pull the cfg rescale PR from A1111 (unless it has since been merged upstream).

e6laion: Another experiment.

It is not a fork and is separate from all other lines.

Trained on a dataset of e6, laion, and booru images, it is relearning things that base SD1.5 knew. Also uses vpred. Still very experimental and does not have many epochs yet. It is not uploaded here yet, but can be downloaded from the HuggingFace repo. Results can be unstable.

PolyFur: A newer project, somewhat similar to e6laion, but with the additional dataset being human-curated. It has a similar goal of reintroducing natural language prompting, with a focus on improved aesthetics.

It is not a fork and is separate from all other lines.

It is showing improvements each epoch and might get a release here around early August. Also uses vpred. It can be downloaded now from the HuggingFace repo.

SDXLVAE: An experimental fork of 1088-megares-offset-noise-3M that uses the SDXL VAE.


Tag Autocomplete File - this currently only covers the pre-3M dataset. I am working on building a new one, but there are 35k conflicting tags I have to manually verify and correct.

Two Epoch Numbers?

First number is continuous from the start of training.

Second number is from when that specific line was forked.

Example: fluffyrock-576-704-832-960-1088-lion-low-lr-e101-terminal-snr-vpred-e74

101st checkpoint from the start of the 1088 multires line. This is the total epoch count.

74th checkpoint since terminal-snr was forked, i.e. the number of epochs done on tsnr. (vpred was most likely forked off at e20-e21.)


Output looks bad:

Do not sample at 512x512. Use 768 or greater. Going past 1088 may result in the typical SD1.x high-res anomalies. High-res fix and other similar methods work well to easily achieve 2k+ resolutions.

Prompt some art styles. Use "by [e6 artist tag without underscores]". For better results, prompt several. A1111's prompt-editing feature works really well for creating unique styles.
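As an illustration of style mixing with A1111's `[from:to:when]` prompt-editing syntax (artist tags are placeholders):

```
[by artistone:by artisttwo:0.4], solo, anthro, wolf, portrait
```

This samples with artistone's style for the first 40% of the steps, then switches to artisttwo, which often produces a blend that neither artist tag gives alone.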

Some tags, while known to the model, had either too few samples or samples with heavy bias. Training a custom LoRA for the concept is usually a good way to get it to your preference.

VPred troubleshooting:

Output is just noise/clouds: missing config file.
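For context, the config file's main job is telling the sampler this is a v-prediction model. The key setting in an SD1.5 vpred yaml looks roughly like this (a sketch of the relevant fragment, not the full file; use the provided config rather than hand-rolling one):

```yaml
model:
  params:
    parameterization: "v"  # v-prediction instead of the default epsilon-prediction
```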

Output too dark: Turn up cfg rescale. Usually 0.7-0.9 seems to work best.

Some samplers may not work, as cfg rescale support is not yet complete. See the Discord thread for the latest discussion on this.

Training LoRAs:

Previously, e27 was recommended as the model to train against, as the results would be more portable to the other FR model branches at the time. This is outdated.

LoRAs trained on any recent FR line have had decent portability between model lines in my personal experience. But training against the model you plan to sample with will likely give the best results.

Noise-offset models may require training with a noise offset >0 for good results, though such LoRAs may not work well on models that do not use noise offset. Start with 0 and check the results. The offset-noise models are pretty outdated now, and you likely want to consider a newer model line.

Terminal-SNR (non-vpred) models require nothing special.

vpred requires training with v_parameterization enabled. kohya_ss will complain about using that on v1; ignore it, nobody expected people would train SD1.5 with v-prediction.
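A rough sketch of what that looks like with kohya's sd-scripts (paths and most other flags here are placeholders; check your own setup for the full command, the point is just passing the flag):

```shell
# Illustrative only - the important part is --v_parameterization
accelerate launch train_network.py \
  --pretrained_model_name_or_path /path/to/fluffyrock-vpred.safetensors \
  --v_parameterization \
  --network_module networks.lora \
  --train_data_dir /path/to/dataset \
  --output_dir /path/to/output
```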

LoRAs trained on non-vpred FR models will likely work.

Ask in the Discord for help.

Links and resources:

Tag Autocomplete File

Hugging Face Repo - contains every version of every model line. A full git clone of the repo will require >1.5TB of disk space; you have been warned.

FluffyRock Discord Server

Furry Diffusion Discord Server and the FR thread there

LodestoneRock's Patreon - help support them with the cost of training.

License: WTFPL


Due to Civitai's on-site gen being broken (for at least these models), I've had to set the commercial use to an incorrect value to disable the annoying "Create" button. You can use the models on the gen service, we don't care, but it would be cool if it actually worked. :V

Apparently it works now except with vpred models.