# prompt-generator-comfyui

Custom AI prompt generator node for ComfyUI. With this node, you can use text generation models to generate prompts. Before using it, a text generation model has to be trained with a prompt dataset, or you can use one of the pretrained models.
## Setup

### For Portable Installation of ComfyUI (Windows)

- Clone the repository with the `git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git` command under the `custom_nodes` folder.
- Go to the `ComfyUI_windows_portable` folder and run the `run_nvidia_gpu.bat` file.
- Use the `hires.fixWithPromptGenerator.json` or `basicWorkflowWithPromptGenerator.json` workflow.
- Put your generator under the `models/prompt_generators` folder. You can create your own prompt generator with this repository. The generator has to be put in as a folder; do not put just the `pytorch_model.bin` file, for example.
- Click the `Refresh` button in ComfyUI.

### For Manual Installation of ComfyUI

- Clone the repository with the `git clone https://github.com/alpertunga-bile/prompt-generator-comfyui.git` command under the `custom_nodes` folder.
- Use the `hires.fixWithPromptGenerator.json` or `basicWorkflowWithPromptGenerator.json` workflow.
- Put your generator under the `models/prompt_generators` folder, as a folder; do not put just the `pytorch_model.bin` file, for example.
- Click the `Refresh` button in ComfyUI.
## Features

The generated prompts are saved to the `generated_prompts` folder with the date as the filename.

## Pretrained Prompt Models

You can find the models in this link.
To use a pretrained model, follow these steps:

- Download the model and put it under the `models/prompt_generators` folder.
- Click the `Refresh` button in ComfyUI.
- Select the generator with the `model_name` variable (if you can't see the generator, restart ComfyUI).

The model versions are used to differentiate models rather than to show which one is better.
The v2 version is the latest trained model and the v4 and v5 models are experimental models.
| Model Name | Status |
| :---: | :---: |
| female_positive_generator_v2 | (Training In Process) |
| female_positive_generator_v3 | (Training In Process) |
| female_positive_generator_v4 | Experimental |
| Variable Names | Definitions |
| :-----------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| model_name | Folder name that contains the model |
| accelerate | Enable optimizations. Some models are not supported by BetterTransformer (check your model). If your model is not supported, set this option to disable or convert your model to ONNX |
| quantize | Quantize the model. The available quantize types change based on your OS and torch version; the none value disables quantization. See the Quantization section below for more information |
| prompt | Input prompt for the generator |
| seed | Seed value for the model |
| lock | Lock the generation and select from the last generated prompts with index value |
| random_index | Random index value in [1, 5]. If this variable is enabled, the index value is not used |
| index | User-specified index value for selecting a prompt from the generated prompts. The random_index variable must be disabled |
| cfg | CFG is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality |
| min_new_tokens | The minimum number of tokens to generate, ignoring the number of tokens in the prompt |
| max_new_tokens | The maximum number of tokens to generate, ignoring the number of tokens in the prompt |
| do_sample | Whether or not to use sampling; use greedy decoding otherwise |
| early_stopping | Controls the stopping condition for beam-based methods, like beam-search |
| num_beams | Number of beams for beam search. 1 means no beam search |
| num_beam_groups | Number of groups to divide num_beams into in order to ensure diversity among different groups of beams |
| diversity_penalty | This value is subtracted from a beam's score if it generates the same token as any beam from another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled |
| temperature | How sensitive the algorithm is to selecting low probability options |
| top_k | The number of highest probability vocabulary tokens to keep for top-k-filtering |
| top_p | If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation |
| repetition_penalty | The parameter for repetition penalty. 1.0 means no penalty |
| no_repeat_ngram_size | The size of an n-gram that cannot occur more than once. (0=infinity) |
| remove_invalid_values | Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation |
| self_recursive | See this section |
| recursive_level | See this section |
| preprocess_mode | See this section |
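Most of these variables correspond to standard Hugging Face `transformers` generation arguments. As a rough illustration of how the sampling-related rows read (placeholder values, not the node's defaults or its actual internals), they could be collected in a `GenerationConfig` like this:

```python
from transformers import GenerationConfig

# Illustrative values only; in ComfyUI these are set as inputs on the node.
config = GenerationConfig(
    min_new_tokens=20,
    max_new_tokens=50,
    do_sample=True,           # multinomial sampling; False -> greedy decoding
    temperature=0.9,          # < 1.0 sharpens, > 1.0 flattens the distribution
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.1,
    no_repeat_ngram_size=2,
    guidance_scale=1.2,       # cfg: values > 1 enable classifier-free guidance
    remove_invalid_values=True,
)
print(config)
```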
## Quantization

The Quanto package requires `torch >= 2.4`, and the Bitsandbytes package works out-of-the-box with Linux OS. So the node checks which package to use:

- If neither package can be used, the `quantize` variable has only the `none` value.
- If the Quanto package is used, the `quantize` variable has the `none`, `int8`, `float8`, `int4` values.
- If the Bitsandbytes package is used, the `quantize` variable has the `none`, `int8`, `int4` values.
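A minimal sketch of that selection logic, assuming the version and OS checks described above (hypothetical helper, not the node's actual code):

```python
import platform

from packaging.version import Version
import torch

def get_quantize_choices() -> list[str]:
    # Quanto needs torch >= 2.4; strip local suffixes like "+cu121" first.
    if Version(torch.__version__.split("+")[0]) >= Version("2.4"):
        return ["none", "int8", "float8", "int4"]
    # Bitsandbytes works out-of-the-box on Linux.
    if platform.system() == "Linux":
        return ["none", "int8", "int4"]
    # Neither package is usable: quantization stays disabled.
    return ["none"]
```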
## Random Generation

For random generation: you can find this text generation strategy at the link above. The strategy is called multinomial sampling. Setting the `do_sample` variable to disable gives deterministic generation.

For more randomness, you can, for example, raise the `temperature` value or enable the `random_index` variable.
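For instance, with a placeholder model (not one of the prompt generators), the difference between the two modes looks like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
inputs = tokenizer("1girl, masterpiece", return_tensors="pt")

set_seed(42)  # fixing the seed makes multinomial sampling reproducible
sampled = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=20)

# Greedy decoding: always picks the most probable next token, so the
# output is deterministic regardless of the seed.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=20)

print(tokenizer.decode(sampled[0], skip_special_tokens=True))
print(tokenizer.decode(greedy[0], skip_special_tokens=True))
```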
## How Recursive Works

Assume that the seed is `a` and the recursive level is 1. The same outputs are used in this example to explain the functionality more clearly.

- If self recursive is enabled: say the generator's output for seed `a` is `b`. The next seed is then `b`, and the generator's output for it is `c`. The final output is `a, c`. This can be used for generating random outputs.
- If self recursive is disabled: say the generator's output for seed `a` is `b`. The next seed is then `a, b`, and the generator's output for it is `c`. The final output is `a, b, c`. This can be used for more accurate prompts.
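A short sketch of this behavior (here `generate` stands in for one call to the text generation model; a hypothetical helper, not the node's actual code):

```python
def run_recursive(generate, prompt: str, recursive_level: int, self_recursive: bool) -> str:
    # generate: callable that maps a seed string to the generator's output.
    seed = prompt
    output = generate(seed)                 # e.g. seed "a" -> output "b"
    for _ in range(recursive_level):
        # self_recursive feeds only the latest output back as the next seed;
        # otherwise the accumulated text becomes the next seed.
        seed = output if self_recursive else f"{seed}, {output}"
        output = generate(seed)             # e.g. -> "c"
    # self_recursive: "a, c"; otherwise: "a, b, c"
    return f"{prompt}, {output}" if self_recursive else f"{seed}, {output}"
```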
## How Preprocess Mode Works

- `exact_keyword`: `(masterpiece), ((masterpiece))` is not allowed. It checks the pure keyword, without parentheses and weights. The algorithm adds the prompts from the beginning of the generated text, so add important prompts to the `prompt` variable.
- `exact_prompt`: `(masterpiece), ((masterpiece))` is allowed, but `(masterpiece), (masterpiece)` is not. It checks for an exact match of the prompt.

Example:

```
# ---------------------------------------------------------------------- Original ---------------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt, (masterpiece)
# ------------------------------------------------------------- Preprocess (Exact Keyword) ------------------------------------------------------------- #
((masterpiece)), blahblah, blah, ((same prompt))
# ------------------------------------------------------------- Preprocess (Exact Prompt) -------------------------------------------------------------- #
((masterpiece)), ((masterpiece:1.2)), (masterpiece), blahblah, blah, ((blahblah)), (((((blah))))), ((same prompt)), same prompt
```
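A rough sketch that reproduces the example above (a hypothetical implementation; the node's actual code may differ):

```python
import re

def strip_weights(keyword: str) -> str:
    # "((masterpiece:1.2))" -> "masterpiece": drop parentheses and the :weight part.
    return re.sub(r"[()]", "", keyword).split(":")[0].strip()

def preprocess(text: str, mode: str) -> str:
    seen: set[str] = set()
    kept: list[str] = []
    for part in (p.strip() for p in text.split(",")):
        # exact_prompt compares the raw prompt; exact_keyword compares the
        # cleaned keyword, so differently weighted duplicates are dropped too.
        key = part if mode == "exact_prompt" else strip_weights(part)
        if key and key not in seen:
            seen.add(key)
            kept.append(part)
    return ", ".join(kept)
```

Calling `preprocess(original, "exact_keyword")` and `preprocess(original, "exact_prompt")` on the Original line yields the two preprocessed lines shown.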
## Troubleshooting

If you encounter a problem, please create an issue with the `bug` label.

### Package Upgrade

For manual installation, run the `pip install --upgrade transformers optimum optimum[onnxruntime-gpu]` command.

For portable installation, go to the `ComfyUI_windows_portable` folder and run the `.\python_embeded\python.exe -s -m pip install --upgrade transformers optimum optimum[onnxruntime-gpu]` command.

When installing requirements under the `ComfyUI_windows_portable` folder, the node checks whether the `python_embeded` folder exists and, if it does, uses it to install the required packages.
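A minimal sketch of that check (hypothetical helper, not the node's actual code):

```python
import subprocess
import sys
from pathlib import Path

def upgrade_packages(packages: list[str]) -> None:
    # Prefer the embedded interpreter of a portable ComfyUI install when it
    # exists; otherwise fall back to the current Python interpreter.
    embedded = Path("python_embeded") / "python.exe"
    python = str(embedded) if embedded.exists() else sys.executable
    subprocess.run([python, "-s", "-m", "pip", "install", "--upgrade", *packages], check=True)

upgrade_packages(["transformers", "optimum", "optimum[onnxruntime-gpu]"])
```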
## Contributing

Contributions are welcome. If you have an idea and want to implement it yourself, please fork the repository and open a pull request. If you have an idea but don't know how to implement it, please create an issue with the `enhancement` label.

Contributing can be done in several ways: you can contribute to the code or to the README file.