Takes the difference between the positive and negative conditioning at the cross attention.
This is an experiment.
Only tested with SDXL and SD 1.x.
Will not work with Flux (see the note at the bottom).
This allows you to:
For now, this is done by sneaking the negative conditioning into the attention: a special node concatenates it to the positive conditioning.
The two are then split back apart at the midpoint just before the cross attention.
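As a rough illustration, here is a minimal sketch of that mechanism, assuming ComfyUI's conditioning format (a list of `[tensor, dict]` pairs) and its `attn2_patch` hook, which receives the attention input and the pre-projection context tensors. The function names are hypothetical, not the repo's actual code, and both conditionings are assumed to have the same token length:

```python
import torch

def smuggle_negative(positive, negative):
    # Hypothetical stand-in for the special concat node: append the negative
    # conditioning to the positive along the token dimension (dim=1).
    # Assumes both conditionings have the same number of tokens.
    return [[torch.cat([pos, neg], dim=1), extra.copy()]
            for (pos, extra), (neg, _) in zip(positive, negative)]

def attn2_patch(n, context_k, context_v, extra_options):
    # At the cross attention the context is the concatenated [pos; neg]:
    # split it back at the half and keep only the difference of the halves.
    k_pos, k_neg = context_k.chunk(2, dim=1)
    v_pos, v_neg = context_v.chunk(2, dim=1)
    return n, k_pos - k_neg, v_pos - v_neg
```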
Like any model patcher, it plugs in right after the model loader:
An example workflow is provided.
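For reference, here is a hedged sketch of how such a patcher node could register the patch, using ComfyUI's `set_model_attn2_patch` on a cloned model (the class name and category are hypothetical, and `attn2_patch` is the function sketched above):

```python
class AttentionDifferencePatch:
    # Hypothetical node class; the real node may expose more options.
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"model": ("MODEL",)}}

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"
    CATEGORY = "model_patches"

    def patch(self, model):
        m = model.clone()  # clone so the loaded model is not mutated in place
        m.set_model_attn2_patch(attn2_patch)
        return (m,)
```

Cloning before patching is what lets the node sit right after the loader without affecting other branches of the workflow.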
No modification:
Difference between the positive and negative conditionings:
Difference between the positive and an empty negative conditioning:
No modification, with an empty negative conditioning:
I haven't managed to make this work with anything but SDXL / SD 1.5.
I spent two hours looking for how to patch the equivalent of the cross attention in Flux but couldn't find out how (the right patch keywords, for example).
Any help appreciated!