ComfyICU API Documentation

Take your custom ComfyUI workflows to production. Run ComfyUI workflows using our easy-to-use REST API. Focus on building next-gen AI experiences rather than on maintaining your own GPU infrastructure.

Explore the full code on our GitHub repository: ComfyICU API Examples

1. Create API Key

First, you'll need to create an API key. You can do this from your account settings page. Once you have your key, set it as an environment variable like so:

export COMFYICU_API_KEY=XXX

Install the requests library to make API requests:

pip install requests
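In Python, it can help to fail fast when the key is missing rather than sending unauthenticated requests. A minimal sketch (the helper name is ours, not part of the API):

```python
import os

def get_api_key() -> str:
    """Read the ComfyICU API key from the environment, failing fast if it's missing."""
    key = os.environ.get("COMFYICU_API_KEY")
    if not key:
        raise RuntimeError("Set the COMFYICU_API_KEY environment variable first")
    return key
```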

2. Create a workflow

Next, start by creating a workflow on the ComfyICU website. Run a few experiments to make sure everything is working smoothly. Once you're satisfied with the results, open the specific "run" and click on the "View API code" button. You'll need to copy the workflow_id and prompt for the next steps.

workflow_id = "XXX"

prompt = {
    # ComfyUI API JSON
}
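The prompt JSON can be large, so rather than pasting it inline you might save the "View API code" JSON to a file and load it at runtime. A small sketch (workflow_api.json is a hypothetical filename):

```python
import json

def load_prompt(path: str) -> dict:
    """Load the ComfyUI API JSON copied from the 'View API code' dialog."""
    with open(path) as f:
        return json.load(f)

# prompt = load_prompt("workflow_api.json")
```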

3. Run the workflow with API

Now, let's run the workflow using the API.

import os
import requests

def run_workflow(body):
    url = f"https://comfy.icu/api/v1/workflows/{body['workflow_id']}/runs"
    headers = {
        "accept": "application/json",
        "content-type": "application/json",
        "authorization": "Bearer " + os.environ['COMFYICU_API_KEY']
    }

    response = requests.post(url, headers=headers, json=body)
    return response.json()


run = run_workflow({"workflow_id": workflow_id, "prompt": prompt})
print(run)
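Note that run_workflow returns whatever JSON the server sends back, including error bodies. If you'd rather surface HTTP failures as exceptions, a sketch of a stricter variant (the name, timeout value, and error-handling policy are our choices, not part of the API):

```python
import os
import requests

def run_workflow_checked(body, timeout=30):
    """Same call as run_workflow above, but raises on HTTP errors and network timeouts."""
    url = f"https://comfy.icu/api/v1/workflows/{body['workflow_id']}/runs"
    headers = {
        "accept": "application/json",
        "content-type": "application/json",
        "authorization": "Bearer " + os.environ["COMFYICU_API_KEY"],
    }
    response = requests.post(url, headers=headers, json=body, timeout=timeout)
    response.raise_for_status()  # surface 4xx/5xx responses as exceptions
    return response.json()
```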

4. Check the run status

To check the status of your run, you can either use a webhook to listen for status updates or periodically poll the API.

Webhooks

Webhooks provide real-time updates about your runs. Specify an endpoint when you create a run, and ComfyICU will send HTTP POST requests to that URL when the run is started and completed.

# This webhook endpoint should accept POST requests
webhook = "https://your-public-web-server.com/api/comfyicu-webhook"
run = run_workflow({"workflow_id": workflow_id, "prompt": prompt, "webhook": webhook})
print(run)

You need to have a server that can receive POST requests. Here's a simple example using Flask:

import json

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/comfyicu-webhook', methods=['POST'])
def comfyicu_webhook():
    data = request.json
    print(f"Received webhook update: {json.dumps(data, indent=2)}")
    # Process the webhook data as needed
    return jsonify({"status": "success"}), 200

if __name__ == '__main__':
    app.run(port=5000)

The request body is a run object in JSON format. This object has the same structure as the object returned by the get run status API below.
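Since the payload mirrors the run-status object, a handler will typically branch on its status field. A minimal dispatcher sketch; the field names ('id', 'status') and the COMPLETED/ERROR values mirror the polling example in this guide, so verify them against a real payload:

```python
def handle_run_update(data: dict) -> str:
    """Summarize a webhook payload by its status field.

    Field names ('id', 'status') and the COMPLETED/ERROR values are
    assumptions based on the polling example; check a real payload.
    """
    run_id = data.get("id")
    status = data.get("status", "UNKNOWN")
    if status == "COMPLETED":
        return f"run {run_id} finished"
    if status == "ERROR":
        return f"run {run_id} failed"
    return f"run {run_id} is {status}"
```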

Polling

While webhooks are handy, they're not strictly necessary to use ComfyICU. You can also poll the API periodically to check the status of a run over time as shown below:

def get_run_status(workflow_id, run_id):
    url = f"https://comfy.icu/api/v1/workflows/{workflow_id}/runs/{run_id}"
    headers = {
        "accept": "application/json",
        "content-type": "application/json",
        "authorization": "Bearer " + os.environ['COMFYICU_API_KEY']
    }
    response = requests.get(url, headers=headers)
    return response.json()

status = get_run_status(workflow_id, run['id'])
print(status)

When polling for status updates, you typically want to check periodically until the run is complete. Here's an example of how to do this:

import json
import time

def poll_run_status(workflow_id, run_id, max_attempts=30, delay=10):
    for attempt in range(max_attempts):
        status = get_run_status(workflow_id, run_id)
        print(f"Attempt {attempt + 1}: Run status is {status['status']}")

        if status['status'] in ['COMPLETED', 'ERROR']:
            return status

        time.sleep(delay)

    raise TimeoutError("Max polling attempts reached")

try:
    final_status = poll_run_status(workflow_id, run['id'])
    print(f"Final status: {json.dumps(final_status, indent=2)}")
except TimeoutError as e:
    print(f"Polling timed out: {e}")
except requests.exceptions.RequestException as e:
    print(f"An error occurred while polling: {e}")

5. Add input images and videos

You can use the files field to map a ComfyUI destination path (the key) to a direct download link for the file (the value). Then reference the same filename in the prompt.

For example, if you want to load image1.jpg on a Load Image node:

files = {
    "/input/image1.jpg": "http://public-url-for-assets.com/image1.jpg",  # direct link to download the image
    "/input/image2.jpg": "http://public-url-for-assets.com/image2.jpg",
    "/input/video.mp4": "http://public-url-for-assets.com/video.mp4",
}

prompt = {
    # ComfyUI API JSON
    # ...
    "45": {
        "_meta": {
            "title": "Load Image",
        },
        "inputs": {
            "image": "image1.jpg",  # specify the input filename
            "upload": "image",
        },
        "class_type": "LoadImage",
    },
    # ...
}

run = run_workflow({"workflow_id": workflow_id, "prompt": prompt, "files": files})
print(run)

The files parameter is optional; if you don't have any input files, you can simply omit it. Please note, only files saved to the /output/ directory are considered the output of a run. Temporary files produced by PreviewImage nodes will not be saved.

All files provided via files will be downloaded to their respective path before the workflow execution begins. The file downloads do not count against your GPU usage. Plus, we cache these files to speed up subsequent executions.
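Because a broken download link will only fail once the run starts, it can be worth a client-side pre-flight check on the URLs. A sketch using HEAD requests (our own helper, not part of the API; some hosts reject HEAD, so treat its results as hints):

```python
import requests

def check_file_urls(files: dict) -> list:
    """Return the destination paths whose download URLs look unreachable.

    A pre-flight sketch: one HEAD request per URL to catch broken links
    before submitting a run.
    """
    bad = []
    for path, url in files.items():
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                bad.append(path)
        except requests.exceptions.RequestException:
            bad.append(path)
    return bad
```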

6. Add custom models, loras and embeddings

Just like input files, you can provide custom models via the files field. For example, if you want to use the Thickline lora from Civitai:

files = {
    # ...
    "/models/loras/thickline_fp16.safetensors": "https://civitai.com/api/download/models/16368?type=Model&format=SafeTensor&size=full&fp=fp16",  # direct link to download the lora

    # similarly, you can also provide checkpoints, embeddings, VAEs etc. by using the right ComfyUI path
    "/models/checkpoints/custom_model.safetensors": "https://...",
    "/models/embeddings/custom_embedding.pt": "https://...",
}

prompt = {
    # ComfyUI API JSON
    # ...
    "43": {
        "inputs": {
            "clip": ["4", 1],
            "model": ["4", 0],
            "lora_name": "thickline_fp16.safetensors",  # specify the lora filename
            "strength_clip": 1,
            "strength_model": 1,
        },
        "class_type": "LoraLoader",
    },
    # ...
}

run = run_workflow({"workflow_id": workflow_id, "prompt": prompt, "files": files})
print(run)

7. Change the GPU accelerator

Add the accelerator option to the request body to select from T4, L4, A10, A100_40GB, A100_80GB, or H100 GPUs.

accelerator = "A100_40GB"
run = run_workflow({"workflow_id": workflow_id, "prompt": prompt, "files": files, "accelerator": accelerator})
print(run)
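A typo in the accelerator name would only surface as an API error after the request is sent, so a small client-side guard can fail earlier. A sketch using the names listed above (keep the set in sync with the ComfyICU docs):

```python
# GPU names from the list above; keep in sync with the ComfyICU docs.
SUPPORTED_ACCELERATORS = {"T4", "L4", "A10", "A100_40GB", "A100_80GB", "H100"}

def pick_accelerator(name: str) -> str:
    """Catch accelerator-name typos locally instead of via an API error."""
    if name not in SUPPORTED_ACCELERATORS:
        raise ValueError(
            f"Unknown accelerator {name!r}; expected one of {sorted(SUPPORTED_ACCELERATORS)}"
        )
    return name
```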

And there you have it! You're now ready to build and run your custom ComfyUI workflows using ComfyICU API. Happy coding!