ComfyUI Extension: Customizable API Call Nodes by BillBum

Authored by AhBumm


API call nodes for third-party platforms, both official and local. Supports VLMs, LLMs, DALL-E 3, FLUX Pro, SD3, etc., plus some small utilities: image to base64 URL, base64 URL to image, base64 URL to base64 data, reducing text to words and ',' only, etc.



    Introduction

    BillBum Modified ComfyUI Nodes is a set of nodes I made for my own use of APIs in ComfyUI, covering DALL-E, OpenAI's LLMs, other LLM API platforms, and other image-generation APIs.

    Features

    • Text Generation: call an LLM via API for text generation; structured responses (not working yet).
    • Image Generation: generate images via API; supports DALL-E, FLUX 1.1 Pro, etc.
    • Vision LM: caption or describe an image via API; requires a vision-capable model.
    • Little tools: base64 URL to base64 data, base64 URL to IMAGE, IMAGE to base64 URL, reduce LLM text to words and "," only, etc.
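    The base64 utilities above amount to wrapping and unwrapping a data URL. A minimal sketch of the idea (function names are illustrative, not the actual node names):

```python
import base64

def image_to_b64_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data URL."""
    data = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{data}"

def b64_url_to_b64_data(b64_url: str) -> str:
    """Strip the 'data:<mime>;base64,' prefix, leaving bare base64 data."""
    return b64_url.split("base64,", 1)[1]

def b64_url_to_image(b64_url: str) -> bytes:
    """Decode a base64 data URL back to raw image bytes."""
    return base64.b64decode(b64_url_to_b64_data(b64_url))
```

    A data URL produced this way can be sent directly in the image field of most vision API requests.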

    Update

    • Added a use_jailbreak option to the VisionLM API node. If your caption task is rejected due to NSFW content, try enabling use_jailbreak. Tested models:
      • llama-3.2-11b
      • llama-3.2-90b
      • gemini-1.5-flash
      • gemini-1.5-pro
      • pixtral-12b-latest
    • Added an Image API Call Node. In theory, you can request and test any t2i model API with this node.
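    Requesting an arbitrary t2i API generally boils down to POSTing a JSON payload to an OpenAI-style /images/generations endpoint. A hedged sketch of assembling such a request (the endpoint path, model name, and payload fields are assumptions; your provider may differ):

```python
import json
import urllib.request

def build_t2i_request(base_url: str, api_key: str, model: str, prompt: str,
                      size: str = "1024x1024") -> urllib.request.Request:
    """Assemble a POST request for an OpenAI-style image generation endpoint."""
    payload = {"model": model, "prompt": prompt, "n": 1, "size": size}
    return urllib.request.Request(
        f"{base_url}/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

    Sending the request with urllib.request.urlopen returns provider-specific JSON, commonly containing a b64_json or url field per generated image.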

    Installation

    In the ComfyUI Manager menu, choose "Install via Git URL":

    https://github.com/AhBumm/ComfyUI_BillBum_Nodes.git
    

    Or search "billbum" in ComfyUI Manager.

    Install the requirements with ComfyUI's embedded Python:

    pip install -r requirements.txt