ComfyUI-DataSet

Authored by daxcay


Data research, preparation, and manipulation nodes for model trainers and artists.


    Drag & drop image into your workspace for node layout


    Updates

    [Nov 12th 2024]

    • ClaudeAIChat, GroqAIChat, and OpenAIChat now read API keys defined in the system environment.
    • The variable names are OPENAI_API_KEY, GROQ_API_KEY, and ANTHROPIC_API_KEY (see the sketch below).
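
    For example, the keys can be picked up in Python like this (a minimal sketch; the exact lookup inside the nodes may differ):

    ```python
    import os

    # Fall back to the environment when no key is supplied in the node widget
    # (variable names taken from the update note above).
    openai_key = os.environ.get("OPENAI_API_KEY")
    groq_key = os.environ.get("GROQ_API_KEY")
    anthropic_key = os.environ.get("ANTHROPIC_API_KEY")
    ```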

    Installation

    Note: Please upgrade; this is a major update that supersedes all previous versions.

    Using comfy-cli (https://github.com/yoland68/comfy-cli)

    • Run: comfy node registry-install ComfyUI-DataSet
    • Registry page: https://registry.comfy.org/publishers/daxcay/nodes/comfyui-dataset

    Manual Method

    • Go to your ComfyUI > custom_nodes folder and open a command prompt (CMD) there.
    • Run: git clone https://github.com/daxcay/ComfyUI-DataSet.git
    • Then change into the ComfyUI-DataSet folder.
    • Run pip install -r requirements.txt to install the dependencies.

    Automatic Method with Comfy Manager

    • Inside ComfyUI, click the Manager button in the sidebar.

    • Click Custom Nodes Manager, search for DataSet, and install this node.

    • Restart ComfyUI and it should be good to go.

    Recommended Plugin

    • ComfyUI-JDCN (https://github.com/daxcay/ComfyUI-JDCN)

    After installation, you can find the DataSet nodes under their own category in the node menu.

    DataSet_Visualizer

    The DataSet_Visualizer node visualizes dataset captions by generating graphs that offer different perspectives on token analysis: a word cloud that scales font size by token frequency, a network graph that illustrates the relationships between tokens, and a frequency graph that gives an exact count of how often each token appears in your captions (see the sketch below).

    Inputs

    • TextFileContents(STRING, required): the contents of the text file to be processed.
    • Seperator(['comma', 'colon', 'space', 'pipe'], required): the delimiter used to separate tokens in the text file.
    • WordCloudTop(INT, min: 1, max: 9999, required): the number of top tokens to be plotted in WordCloud.
    • NetworkGraphTop(INT, min: 1, max: 9999, required): the number of top tokens having the highest interconnections within the captions.
    • FrequencyGraphTop(INT, min: 1, max: 9999, required): the number of most frequent tokens to plot, ordered from highest to lowest frequency.

    Outputs

    • GraphsPaths(STRING, list): the file paths of the generated visualizations. It includes paths for: WordCloud image, NetworkGraph image, FrequencyTable image
    • GraphsImages(IMAGE, list): the generated visualization images, which can be used with the PreviewImage and SaveImage nodes.
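
    As a rough illustration of the token analysis behind these graphs, here is a minimal sketch of the word-cloud and frequency computations using collections.Counter and the wordcloud package (illustrative only; the node's internals may differ):

    ```python
    from collections import Counter
    from wordcloud import WordCloud  # pip install wordcloud

    text_file_contents = "0hx1 woman, red dress, red lips, smiling"  # example caption
    tokens = [t.strip() for t in text_file_contents.split(",") if t.strip()]

    freqs = Counter(tokens)
    top = dict(freqs.most_common(100))  # WordCloudTop
    WordCloud(width=800, height=600).generate_from_frequencies(top).to_file("wordcloud.png")

    # Frequency graph data: top tokens ordered from highest to lowest frequency.
    for token, count in freqs.most_common(20):  # FrequencyGraphTop
        print(token, count)
    ```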


    DataSet_CopyFiles

    The DataSet_CopyFiles node copies files from a source folder to a destination folder using one of two modes, BlindCopy or CopyByDestinationFiles (both sketched below).

    Inputs

    • source_folder (STRING, default: "directory path", required): path of the folder containing the files to copy.
    • destination_folder (STRING, default: "directory path", required): path of the folder the files are copied to.
    • copy_mode (['BlindCopy', 'CopyByDestinationFiles'], required):
      • BlindCopy: copies all files from source to the destination folder.
      • CopyByDestinationFiles: copies files from source folder to the destination only if there is a matching file (based on the base name) already present in the destination.
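
    A minimal sketch of the two modes, assuming flat folders and matching by base name (illustrative, not the node's actual code):

    ```python
    import os
    import shutil

    def copy_files(source_folder: str, destination_folder: str,
                   copy_mode: str = "BlindCopy") -> None:
        # Base names already present in the destination (used by CopyByDestinationFiles).
        dest_basenames = {os.path.splitext(f)[0] for f in os.listdir(destination_folder)}
        for name in os.listdir(source_folder):
            src = os.path.join(source_folder, name)
            if not os.path.isfile(src):
                continue
            base = os.path.splitext(name)[0]
            if copy_mode == "BlindCopy" or base in dest_basenames:
                shutil.copy2(src, os.path.join(destination_folder, name))
    ```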

    DataSet_TriggerWords

    The DataSet_TriggerWords node extracts trigger words from captions. It identifies trigger words as tokens that contain BOTH letters and numbers (see the sketch below).

    Inputs

    • TextFileContents (STRING, required): the contents of the text file(s) to be processed.
    • search(['trigger_word_only', 'trigger_word_phrase'], required):
      • 'trigger_word_only': extracts the individual trigger words only
      • 'trigger_word_phrase': extracts the entire phrase (delimited by commas) that contains a trigger word

    Outputs

    • Words (STRING, list): the extracted trigger words or trigger-word-containing phrases
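
    A minimal sketch of the extraction rule, assuming comma-separated captions (illustrative, not the node's actual code):

    ```python
    import re

    def is_trigger_word(token: str) -> bool:
        # A trigger word contains BOTH letters and numbers, e.g. "0hx1" or "style5".
        return bool(re.search(r"[A-Za-z]", token)) and bool(re.search(r"\d", token))

    caption = "0hx1 woman, red dress, style5 lighting, smiling"
    phrases = [p.strip() for p in caption.split(",")]

    # 'trigger_word_only': individual trigger words
    words = [w for p in phrases for w in p.split() if is_trigger_word(w)]
    # 'trigger_word_phrase': whole comma-delimited phrases containing a trigger word
    trigger_phrases = [p for p in phrases if any(is_trigger_word(w) for w in p.split())]

    print(words)            # ['0hx1', 'style5']
    print(trigger_phrases)  # ['0hx1 woman', 'style5 lighting']
    ```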

    DataSet_TextFilesLoadFromList

    The DataSet_TextFilesLoadFromList node extracts the basic attributes of .txt files: file names, file names without extensions, file paths, and file contents (see the sketch below). It is useful for certain batched workflows and takes a list of text file paths as input.

    Inputs

    • TextFilePathsList(STRING, required): a list of file paths to the text files to be loaded. Only paths ending with .txt will be processed.

    Outputs

    • TextFileNames(STRING, list): the names of the text files.
    • TextFileNamesWithoutExtension(STRING, list): the names of the text files without their extensions.
    • TextFilePaths(STRING, list): the file paths of the text files.
    • TextFileContents(STRING, list): the contents of the text files.
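
    A minimal sketch of the attribute extraction with pathlib (illustrative; the paths are hypothetical):

    ```python
    from pathlib import Path

    paths = ["captions/001.txt", "captions/002.txt", "notes/readme.md"]
    txt = [Path(p) for p in paths if p.lower().endswith(".txt")]  # .txt only

    names = [p.name for p in txt]  # ['001.txt', '002.txt']
    stems = [p.stem for p in txt]  # ['001', '002']
    full_paths = [str(p) for p in txt]
    contents = [p.read_text(encoding="utf-8") for p in txt]
    ```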

    DataSet_TextFilesLoad

    Same as above, but takes the directory path from a widget instead of an input list.

    Inputs

    • directory(STRING, required): the directory path where the text files are located. The path should be specified as a string.

    Outputs

    • TextFileNames(STRING, list): the names of the text files in the directory.
    • TextFileNamesWithoutExtension(STRING, list): the names of the text files without their extensions.
    • TextFilePaths(STRING, list): the file paths of the text files in the directory.
    • TextFileContents(STRING, list): the contents of the text files in the directory.

    DataSet_TextFilesSave

    The DataSet_TextFilesSave node saves text file contents to a specified directory. It supports four save modes: overwriting, merging, saving as a new file, and merging before saving as a new file (each described below and sketched after the input list).

    Inputs

    • TextFileNames(STRING, required): the names of the text files to be saved.
    • TextFileContents(STRING, required): the contents of the text files to be saved.
    • destination(STRING, required): the directory path where the text files will be saved.
    • save_mode(['Overwrite', 'Merge', 'SaveNew', 'MergeAndSaveNew'], required): the mode of saving the files:
      • Overwrite: overwrites existing files with the same name.
      • Merge: appends content to existing files with the same name.
      • SaveNew: saves new files with a unique name if a file with the same name already exists.
      • MergeAndSaveNew: merges content with existing files and then saves as a new file with a unique name if a file with the same name already exists.
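
    A minimal sketch of the four modes (illustrative, not the node's actual code):

    ```python
    import os

    def unique_path(path: str) -> str:
        # Append a counter until the name is unused, e.g. caption_1.txt.
        base, ext = os.path.splitext(path)
        i, candidate = 1, path
        while os.path.exists(candidate):
            candidate = f"{base}_{i}{ext}"
            i += 1
        return candidate

    def save_text(destination: str, name: str, content: str, save_mode: str) -> None:
        os.makedirs(destination, exist_ok=True)
        path = os.path.join(destination, name)
        if save_mode == "Overwrite":
            with open(path, "w", encoding="utf-8") as f:
                f.write(content)
        elif save_mode == "Merge":
            with open(path, "a", encoding="utf-8") as f:  # append to existing file
                f.write(content)
        elif save_mode == "SaveNew":
            with open(unique_path(path), "w", encoding="utf-8") as f:
                f.write(content)
        elif save_mode == "MergeAndSaveNew":
            merged = content
            if os.path.exists(path):
                with open(path, "r", encoding="utf-8") as f:
                    merged = f.read() + content  # merge with existing content
            with open(unique_path(path), "w", encoding="utf-8") as f:
                f.write(merged)
    ```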

    DataSet_FindAndReplace

    The DataSet_FindAndReplace node finds and replaces a text pattern within caption text files (see the sketch below).

    Inputs

    • TextFileContents(STRING, required): the text file contents to be processed
    • SearchFor(STRING, default: "search-text", required): the text pattern to search for within TextFileContents. Supports multiline input.
    • ReplaceWith(STRING, default: "replacement-text", required): the replacement text for the SearchFor pattern. Supports multiline input.

    Outputs

    • TextFileContents(STRING, list): the modified contents of the text files
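
    The operation amounts to a plain substring replacement over each caption, as in this minimal sketch:

    ```python
    contents = ["a photo of a cat, high quality",
                "a photo of a dog, high quality"]
    search_for = "a photo of"
    replace_with = "an illustration of"

    modified = [c.replace(search_for, replace_with) for c in contents]
    # ['an illustration of a cat, high quality', 'an illustration of a dog, high quality']
    ```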

    DataSet_PathSelector

    The DataSet_PathSelector node is useful for identifying files in a sub-dataset that are missing their counterparts, for example images missing the caption text files that exist in a larger parent repository of image-text pairs. The node searches one directory for orphaned files and selects the files with matching base names from another directory (see the sketch below).

    Inputs

    • search_in_directory(STRING, required): the sub-dataset directory with missing pairings.
    • search_for_extensions(STRING, required): the extensions of the orphaned files, separated by commas (e.g., .txt, .csv).
    • select_from_directory(STRING, required): the repository directory containing the complete text-image pairings.
    • select_extensions(STRING, required): the extensions of the required files to be added, separated by commas (e.g., .txt, .csv).

    Outputs

    • SelectedNamesWithExtension(STRING, list): the names of the required files with their extensions.
    • SelectedNamesWithoutExtension(STRING, list): the names of the required files without their extensions.
    • SelectedPaths(STRING, list): the full paths of the required files.
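
    A rough sketch of the matching logic as described above, assuming flat directories and comma-separated extension lists (illustrative, not the node's actual code):

    ```python
    from pathlib import Path

    def select_missing_pairs(search_in: str, search_for_extensions: str,
                             select_from: str, select_extensions: str):
        orphan_exts = {e.strip().lower() for e in search_for_extensions.split(",")}
        wanted_exts = {e.strip().lower() for e in select_extensions.split(",")}

        # Base names of the orphaned files in the sub-dataset.
        orphans = {p.stem for p in Path(search_in).iterdir()
                   if p.suffix.lower() in orphan_exts}

        # Counterpart files with matching base names in the parent repository.
        selected = [p for p in Path(select_from).iterdir()
                    if p.suffix.lower() in wanted_exts and p.stem in orphans]

        return ([p.name for p in selected],  # SelectedNamesWithExtension
                [p.stem for p in selected],  # SelectedNamesWithoutExtension
                [str(p) for p in selected])  # SelectedPaths
    ```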

    DataSet_ConceptManager

    The DataSet_ConceptManager node adds or removes tokens within caption files and lets you place added tokens at designated positions within the caption (see the sketch below).

    Inputs

    • TextFileContents(STRING, required): the contents of the text file(s) to be processed.
    • Mode(STRING, required): the mode of operation: 'add' to add tokens or 'remove' to remove tokens.
    • Concepts(STRING, required): the concepts to add or remove, formatted as text + position (e.g., "tag1 0, tag2 2" for adding, "tag1, tag2" for removing).

    Outputs

    • TextFileContents(STRING, list): the modified contents of the caption file(s)
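
    A minimal sketch of both modes, assuming comma-separated captions and the concept format shown above (illustrative, not the node's actual code):

    ```python
    def manage_concepts(caption: str, mode: str, concepts: str) -> str:
        tokens = [t.strip() for t in caption.split(",") if t.strip()]
        if mode == "add":
            for spec in concepts.split(","):
                word, pos = spec.strip().rsplit(" ", 1)  # "tag1 0" -> ("tag1", "0")
                tokens.insert(int(pos), word)
        elif mode == "remove":
            remove = {c.strip() for c in concepts.split(",")}
            tokens = [t for t in tokens if t not in remove]
        return ", ".join(tokens)

    print(manage_concepts("red dress, smiling", "add", "tag1 0, tag2 2"))
    # tag1, red dress, tag2, smiling
    ```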

    DataSet_OpenAIChat

    The DataSet_OpenAIChat node uses the OpenAI chat API to help you generate prompts (see the sketch below).

    Inputs

    • model(STRING, required): select the OpenAI model. Options include "GPTo", "gpt-3.5-turbo", etc.
    • api_url(STRING, default: "https://api.openai.com/v1"): the base URL for the API.
    • api_key(STRING, required): the API key for authentication.
    • prompt(STRING, default: ""): the chat query; prompt the model to generate prompts.
    • token_length(INT, default: 1024): the maximum number of tokens in the response.

    Outputs

    • STRING: the generated prompt
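
    For reference, a minimal sketch of an equivalent direct call with the official openai Python client (the node's internals may differ; the model name and prompt are just examples):

    ```python
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # falls back to the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o",
        max_tokens=1024,  # token_length
        messages=[{
            "role": "user",
            "content": "Write a detailed image-generation prompt for a rainy cyberpunk street.",
        }],
    )
    print(response.choices[0].message.content)
    ```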

    DataSet_LoadImage

    The DataSet_LoadImage node provides the essential image file attributes used for captioning, for example with DataSet_OpenAIChatImage. It leverages the Pillow and NumPy libraries (see the sketch below).

    Inputs

    • image (STRING, required): the name of the image file to load from the input directory.

    Outputs

    • IMAGE: the image file.
    • MASK: the mask associated with the image.
    • STRING: the name of the image file.
    • STRING: the name of the image file without extension.
    • STRING: the full path of the image file.
    • STRING: the directory path of the image file.
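
    A minimal sketch of a Pillow/NumPy load of this kind (illustrative; the path is hypothetical and the node's internals may differ):

    ```python
    import numpy as np
    from PIL import Image, ImageOps

    path = "input/example.png"  # hypothetical file
    img = ImageOps.exif_transpose(Image.open(path))  # honor EXIF orientation

    # IMAGE: float array in [0, 1], height x width x channels
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0

    # MASK: taken from the alpha channel when present, otherwise all ones
    if img.mode == "RGBA":
        mask = np.asarray(img.split()[-1], dtype=np.float32) / 255.0
    else:
        mask = np.ones(rgb.shape[:2], dtype=np.float32)
    ```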

    DataSet_SaveImage

    The DataSet_SaveImage node batch-saves images to a specified directory with optional PNG metadata (see the sketch below). It also uses Pillow and NumPy.

    Inputs

    • Images(IMAGE, required): list of images to save.
    • ImageFilePrefix(STRING, default: "Image"): prefix for the saved image filenames.
    • destination(STRING): directory path where images will be saved.
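
    A minimal sketch of a batch save with optional PNG text metadata via Pillow (illustrative, not the node's actual code):

    ```python
    import os
    import numpy as np
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def save_images(images, destination: str, prefix: str = "Image") -> None:
        # images: list of float arrays in [0, 1], height x width x channels
        os.makedirs(destination, exist_ok=True)
        for i, arr in enumerate(images):
            img = Image.fromarray((np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8))
            meta = PngInfo()
            meta.add_text("prompt", "example metadata")  # hypothetical payload
            img.save(os.path.join(destination, f"{prefix}_{i:05}.png"), pnginfo=meta)
    ```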

    DataSet_OpenAIChatImage

    The DataSet_OpenAIChatImage node uses OpenAI's multimodal vision chat API to caption images (see the sketch below).

    Inputs

    • image(IMAGE, required): image to be processed.
    • image_detail(STRING, default: "high"): detail level of the image ("low" or "high").
    • prompt(STRING, default: ""): text prompt for the AI model.
    • model(STRING, default: "gpt-4o"): select the OpenAI model. Options include "GPTo", "gpt-3.5-turbo", etc.
    • api_url(STRING, default: "https://api.openai.com/v1"): OpenAI API endpoint URL.
    • api_key(STRING): the API key for authentication.
    • token_length(INT, default: 1024): maximum token length for the generated response.

    Outputs

    • STRING: generated captions
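
    For reference, a minimal sketch of an equivalent direct vision call with the official openai Python client (the node's internals may differ; the image path and prompt are examples):

    ```python
    import base64
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # falls back to the OPENAI_API_KEY environment variable

    with open("input/example.png", "rb") as f:  # hypothetical image
        b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        max_tokens=1024,  # token_length
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image as a training caption."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}",
                               "detail": "high"}},  # image_detail
            ],
        }],
    )
    print(response.choices[0].message.content)
    ```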

    DataSet_OpenAIChatImageBatch

    The DataSet_OpenAIChatImageBatch node extends DataSet_OpenAIChatImage to process batches of images with OpenAI's chat API, generating captions for each image.

    Inputs

    • images(IMAGE, required): list of images to be processed.
    • image_detail(STRING, default: "high"): detail level of the images ("low" or "high").
    • prompt(STRING, default: ""): text prompt for the AI model.
    • model(STRING, default: "gpt-4o"): select the OpenAI model. Options include "GPTo", "gpt-3.5-turbo", etc.
    • api_url(STRING, default: "https://api.openai.com/v1"): OpenAI API endpoint URL.
    • api_key(STRING): the API key for authentication.
    • token_length(INT, default: 1024): maximum token length for the generated response.

    Outputs

    • STRING: list of generated captions

    Credits

    Raf Stahelin - Testing and Feedback

    Daxton Caylor - ComfyUI Node Developer

    • Contact

      • Twitter: @daxcay27
      • Email - [email protected]
      • Discord - daxtoncaylor
      • DiscordServer: https://discord.gg/UyGkJycvyW
    • Support

      • Buy me a coffee: https://buymeacoffee.com/daxtoncaylor

      • Support me on PayPal: https://paypal.me/daxtoncaylor