This node pack's main function is prompt generation; it also provides multifunctional nodes for error-report assistance, dynamic display of node computation time, image reverse prompting (captioning), translation, and more.
The chat functionality of the current nodes is based on the Groq API (https://console.groq.com/keys), the kimi API (https://platform.moonshot.cn/console/api-keys), and the deepseek API (https://platform.deepseek.com/api_keys). Please obtain the necessary API keys from the respective websites and place them in the api_key.ini file. Through API calls, single-turn or multi-turn chat can generate positive and negative prompts as well as error guides. The pack also integrates image reverse-prompting nodes based on the moondream2 and PaliGemma models.
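As a minimal sketch of how a node might read those keys from api_key.ini, the helper below uses the standard-library configparser. The section name `keys` and the per-service entry names are hypothetical examples; check the actual file shipped with the node pack for its real layout.

```python
import configparser
from pathlib import Path

def load_api_key(service: str, ini_path: str = "api_key.ini") -> str:
    """Read the API key for one service (e.g. "groq", "kimi", "deepseek")
    from api_key.ini. The [keys] section layout here is an assumption for
    illustration, not the node pack's confirmed format."""
    config = configparser.ConfigParser()
    config.read(Path(ini_path), encoding="utf-8")
    key = config.get("keys", service, fallback="")
    if not key:
        raise ValueError(f"No API key found for {service!r} in {ini_path}")
    return key
```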
<details>
<summary>What the parameters presence_penalty and frequency_penalty do</summary>

`presence_penalty` and `frequency_penalty` are parameters used to control the diversity and repetition of the language model's output.

presence_penalty: penalizes a token once it has appeared in the text at all, encouraging the model to move on to new content.

frequency_penalty: penalizes a token in proportion to how often it has already appeared, discouraging verbatim repetition.
The main difference between these two parameters:
- `presence_penalty` only cares whether a token has appeared at all, regardless of how many times.
- `frequency_penalty` takes the number of occurrences into account: the more occurrences, the greater the penalty.

Examples of use:
If you want the model to produce more diverse content, you can set higher positive values, for example:
presence_penalty=0.6, frequency_penalty=0.8
If you want the model to be more focused on a specific topic, you can use a lower value or a slightly negative value, for example:
presence_penalty=0, frequency_penalty=-0.2
For most general purposes, keeping these two values at or near 0 usually works well:
presence_penalty=0, frequency_penalty=0
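To make the settings above concrete, here is a sketch of how these two parameters travel in the JSON body of an OpenAI-compatible chat-completion request, which is the request shape the Groq, kimi, and deepseek endpoints accept. The model name is only an example, not a recommendation; substitute one your provider actually serves.

```python
def build_chat_payload(prompt: str,
                       model: str = "llama3-8b-8192",
                       presence_penalty: float = 0.0,
                       frequency_penalty: float = 0.0) -> dict:
    """Assemble the JSON body for an OpenAI-compatible chat completion call.
    The model name is an illustrative placeholder."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Positive values push the model away from repeated tokens:
        # presence_penalty fires once a token has appeared at all,
        # frequency_penalty grows with each additional repetition.
        "presence_penalty": presence_penalty,
        "frequency_penalty": frequency_penalty,
    }

# "More diverse content" setting from the examples above:
diverse = build_chat_payload("a cat in a garden",
                             presence_penalty=0.6,
                             frequency_penalty=0.8)
```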
Also in the groqchat.py node, temperature and top_p are two important parameters used to control the randomness and diversity of the language model output.
temperature: rescales the token probability distribution; lower values make the output more deterministic, higher values make it more random.

top_p (nucleus sampling): restricts sampling to the smallest set of tokens whose cumulative probability reaches top_p.
Suggestions for the use of these two parameters:
For tasks that require a high degree of consistency and accuracy (e.g., question answering or fact generation), use a lower temperature (e.g., 0.3-0.5) or a lower top_p value.
For creative writing or tasks requiring more variety, use a higher temperature (e.g., 0.7-1.0) or a top_p value close to 1.
Rather than tuning both parameters at once, it is usual to adjust only one of them; temperature is more commonly used, while top_p may be more effective in some specific scenarios.
In practice, the optimal values of these parameters often need to be determined experimentally, as their effects may vary depending on the task and the type of output required.
In the groqchat.py node, these two parameters let the user tailor the character of the model's output to specific needs, finding the right balance between consistency and creativity. For Groq's API, check the documentation to confirm that these parameters behave exactly as described above, as different AI service providers may have subtle implementation differences.
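To make the temperature/top_p discussion concrete, here is a small, framework-free sketch of how the two controls act on a token distribution. This is purely illustrative: with these API-based nodes, the actual sampling happens server-side at the provider.

```python
import math

def apply_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature. Lower temperature
    sharpens the distribution (more deterministic); higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches p, zero out the rest, and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]
```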
</details>

2024-08-12
Added ollama-related nodes. Some of the code references the authors below; thanks to https://github.com/wujm424606/ComfyUi-Ollama-YN and https://github.com/MinusZoneAI/ComfyUI-Prompt-MZ.
2024-08-04
Added gemma2-related nodes. Please manually download the model to ComfyUI/models/LLavacheckpoints/gemma-2-2b-it; model address: https://huggingface.co/google/gemma-2-2b-it/tree/main
2024-08-02
Added a feature to dynamically display node computation time, which can be turned off in the settings. (The time-display code was inspired by ty0x2333's ComfyUI-Dev-Utils; thanks to the author.)
2024-08-01
Added the deepseek chat node, which supports not only regular conversations but also includes five built-in roles: "Error Assistant," "Clickbait Generator," "Inspiration Helper," "Xiaohongshu Style," and "Information Extractor."
2024-07-29
A new personalized translation node based on the deepseek API has been added. Combined with Andrew Ng's agentic translation workflow, it delivers excellent translation quality. See the example below for specific usage:
Set the country parameter to polish the translation results for a target locale.
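Ng's agentic translation workflow is a three-step loop: draft a translation, ask the model to critique it, then ask it to rewrite the draft using the critique. The prompt templates below are an illustrative sketch of that loop and of where a country parameter would plug in; the node's actual prompt wording is not shown here and may differ.

```python
def initial_prompt(text: str, target_lang: str) -> str:
    """Step 1: ask for a first-draft translation."""
    return f"Translate the following text into {target_lang}:\n\n{text}"

def reflection_prompt(text: str, draft: str, target_lang: str, country: str) -> str:
    """Step 2: ask for a critique of the draft. The country parameter
    steers the critique toward idioms natural for a specific locale,
    which is how the result gets 'polished' for a target audience."""
    return (f"Review this {target_lang} translation of the source text and "
            f"suggest improvements in accuracy, fluency, and style, using "
            f"expressions natural for readers in {country}.\n\n"
            f"Source: {text}\nDraft: {draft}")

def improvement_prompt(text: str, draft: str, critique: str, target_lang: str) -> str:
    """Step 3: ask for a rewrite that applies the critique."""
    return (f"Rewrite the {target_lang} draft, taking the critique into account.\n\n"
            f"Source: {text}\nDraft: {draft}\nCritique: {critique}")
```

Each prompt is sent as a separate chat-completion call, with the previous step's answer fed into the next template.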
2024-07-28
Added two new direct prompt-output nodes that expand an input theme directly into a prompt matching SD or Kolors.
Added a new prompt-separation node that splits the positive and negative prompts in a text via regular-expression matching.
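A regex-based split of that kind can be sketched as below. The `Positive prompt:` / `Negative prompt:` labels are an assumed output format for illustration; the node's actual patterns may differ.

```python
import re

def split_prompts(text: str) -> tuple:
    """Split LLM output into (positive, negative) prompt strings.
    Assumes the text labels its sections "Positive prompt:" and
    "Negative prompt:" (a hypothetical format, not the node's
    confirmed one); either section may be missing."""
    pos = re.search(r"positive prompt\s*[::]\s*(.*?)(?=negative prompt|$)",
                    text, re.IGNORECASE | re.DOTALL)
    neg = re.search(r"negative prompt\s*[::]\s*(.*)",
                    text, re.IGNORECASE | re.DOTALL)
    return (pos.group(1).strip() if pos else "",
            neg.group(1).strip() if neg else "")
```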
2024-07-27
In the groqchat node, a new reset-conversation parameter has been added; setting it to True enables single-round conversation. A new prompt_extractor.py node has been added to separate the positive and negative prompts in a text.
2024-07-24
2024-07-14
Added SD3LongCaptionerV2, an image reverse-prompting (captioning) node. Because the model is large, you can manually download it from Hugging Face and place it in the ComfyUI/models/LLavacheckpoints/files_for_sd3_long_captioner_v2 directory; model address: https://huggingface.co/gokaygokay/sd3-long-captioner-v2
Updated the file_based_chat node again. Based on the kimi API's file-conversation feature, it provides a low-cost knowledge-base function; supported upload formats include pdf, doc, xlsx, ppt, txt, images, and so on. As a concrete example, you can upload a "common error problems" knowledge-base document (available in the repository; sourced from K guy) to help resolve error issues. Also newly added support for the kimi API, including the "Moonshot Single Chat" and "Moonshot Multi Chat" nodes, which support single-round and multi-round conversations respectively. API application address: https://platform.moonshot.cn/console/info

A single round of dialog, as shown below:

Multiple rounds of dialog:
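The difference between a single-round and a multi-round chat node comes down to whether earlier messages are replayed on each call. The class below is a minimal sketch of that principle; the real Moonshot nodes' internals are not shown here and may differ.

```python
class ChatSession:
    """Minimal multi-round chat context: keep the full message history
    and resend it with every request (illustrative sketch only)."""

    def __init__(self, system_prompt: str = "You are a helpful assistant."):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text: str) -> list:
        self.messages.append({"role": "user", "content": text})
        return self.messages  # send this whole list to the chat endpoint

    def add_assistant(self, text: str) -> None:
        # Record the model's reply so the next turn sees it as context.
        self.messages.append({"role": "assistant", "content": text})

    def reset(self) -> None:
        # Dropping everything but the system prompt turns the node back
        # into a single-round chat (cf. the reset-conversation parameter).
        del self.messages[1:]
```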
Groq nodes are based on the Groq Cloud API. The following four models are currently supported.
API keys can be requested at https://console.groq.com/keys; as of publication, Groq had not launched a paid plan, but the corresponding rate limits apply, as shown below:
With these nodes, positive and negative prompts can be generated in about 2 seconds from a specific prompt. Image with workflow:
## Statement
The GroqChat node follows the MIT license; some of the functional code comes from other open-source projects, with thanks to the original authors. For commercial use, please refer to the original projects' licenses for authorization. Thanks also to Google's open-source PaliGemma model.