ComfyUI Extension: ComfyUI-ExLlama

Authored by Zuellni




Nodes: ExLlama Loader, ExLlama Generator. Used to load 4-bit GPTQ Llama/Llama 2 models; many of them can be found on Hugging Face. NOTE: You need to manually install a pip package that suits your system. For example, if your system is Python 3.10 + Windows + CUDA 11.8, you need to install exllama-0.0.17+cu118-cp310-cp310-win_amd64.whl from the available package files.



    ComfyUI ExLlamaV2 Nodes

    A simple local text generator for ComfyUI using ExLlamaV2.


    Clone the repository to custom_nodes and install the requirements:

    git clone https://github.com/Zuellni/ComfyUI-ExLlamaV2-Nodes custom_nodes/ComfyUI-ExLlamaV2-Nodes
    pip install -r custom_nodes/ComfyUI-ExLlamaV2-Nodes/requirements.txt

    Use wheels for ExLlamaV2 and FlashAttention on Windows:

    pip install exllamav2-X.X.X+cuXXX.torch2.X.X-cp3XX-cp3XX-win_amd64.whl
    pip install flash_attn-X.X.X+cuXXX.torch2.X.X-cp3XX-cp3XX-win_amd64.whl
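
    The `cp3XX` part of a wheel filename must match your Python version, and the `cuXXX`/`torch2.X.X` parts must match your installed CUDA and torch builds. A minimal sketch for finding the `cp` tag of the interpreter you are running (checking the CUDA and torch versions additionally requires `torch` to be importable):

```python
import sys

# The "cp" tag in a wheel filename corresponds to the running interpreter,
# e.g. Python 3.10 -> cp310, Python 3.11 -> cp311.
py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(py_tag)
```

    Pick the wheel whose filename contains this tag; installing a wheel built for a different Python or CUDA version will fail or crash at import time.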


    Only EXL2, 4-bit GPTQ and unquantized models are supported. You can find them on Hugging Face.

    To use a model with the nodes, you should clone its repository with git or manually download all the files and place them in models/llm. For example, if you want to download the 6-bit Llama-3-8B-Instruct, use the following command:

    git lfs install
    git clone -b 6.0bpw https://huggingface.co/turboderp/Llama-3-8B-Instruct-exl2 models/llm/Llama-3-8B-Instruct-exl2-6.0bpw

    > [!TIP]
    > You can add your own llm path to the extra_model_paths.yaml file and put the models there instead.
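
    As an illustration, an entry in `extra_model_paths.yaml` might look like the sketch below, assuming ComfyUI's standard layout for that file (the top-level key and paths are placeholders, not values from this extension's docs):

```yaml
# extra_model_paths.yaml (illustrative sketch)
my_storage:
    base_path: /path/to/storage   # placeholder root directory
    llm: llm                      # subdirectory holding your LLM model folders
```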


    <table>
      <tr>
        <td><b>Loader</b></td>
        <td colspan="2">Loads models from the <code>llm</code> directory.</td>
      </tr>
      <tr>
        <td></td>
        <td><i>cache_bits</i></td>
        <td>A lower value reduces VRAM usage, but also affects generation speed and quality.</td>
      </tr>
      <tr>
        <td></td>
        <td><i>fast_tensors</i></td>
        <td>Enabling reduces RAM usage and speeds up model loading.</td>
      </tr>
      <tr>
        <td></td>
        <td><i>flash_attention</i></td>
        <td>Enabling reduces VRAM usage; not supported on cards with compute capability below <code>8.0</code>.</td>
      </tr>
      <tr>
        <td></td>
        <td><i>max_seq_len</i></td>
        <td>Max context; a higher value means higher VRAM usage. <code>0</code> will default to the model config.</td>
      </tr>
      <tr>
        <td><b>Generator</b></td>
        <td colspan="2">Generates text based on the given prompt. Refer to <a href="">SillyTavern</a> for sampler parameters.</td>
      </tr>
      <tr>
        <td></td>
        <td><i>unload</i></td>
        <td>Unloads the model after each generation to reduce VRAM usage.</td>
      </tr>
      <tr>
        <td></td>
        <td><i>stop_conditions</i></td>
        <td>List of strings to stop generation on, e.g. <code>["\n"]</code> to stop on newline. Leave empty to only stop on the <code>eos</code> token.</td>
      </tr>
      <tr>
        <td></td>
        <td><i>max_tokens</i></td>
        <td>Max new tokens; <code>0</code> will use the available context.</td>
      </tr>
      <tr>
        <td><b>Previewer</b></td>
        <td colspan="2">Displays generated text in the UI.</td>
      </tr>
      <tr>
        <td><b>Replacer</b></td>
        <td colspan="2">Replaces variable names in brackets, e.g. <code>[a]</code>, with their values.</td>
      </tr>
    </table>
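
    The Replacer's bracket substitution can be sketched in a few lines of Python. This is an approximation of the behavior described above, not the node's actual implementation:

```python
import re


def replace_vars(text: str, values: dict) -> str:
    """Replace [name] placeholders with their values, Replacer-style."""
    # Unknown names are left untouched so missing inputs are easy to spot.
    return re.sub(
        r"\[(\w+)\]",
        lambda m: str(values.get(m.group(1), m.group(0))),
        text,
    )


print(replace_vars(
    "A portrait of [a] in [b] style.",
    {"a": "a cat", "b": "watercolor"},
))
# -> A portrait of a cat in watercolor style.
```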


    An example workflow is embedded in the image below and can be opened in ComfyUI.