Take your custom ComfyUI workflows to production. Run ComfyUI workflows using our easy-to-use REST API. Focus on building next-gen AI experiences rather than on maintaining your own GPU infrastructure.
Explore the full code on our GitHub repository: ComfyICU API Examples
First, you'll need to create an API key. You can do this from your account settings page. Once you have your key, expose it as an environment variable like so:
export COMFYICU_API_KEY=XXX
Install node-fetch to make API requests:
npm install node-fetch@2 --save
Next, start by creating a workflow on the ComfyICU website. Run a few experiments to make sure everything is working smoothly.
Once you're satisfied with the results, open the specific run and click on the "View API code" button. You'll need to copy the workflow_id and prompt for the next steps.
const workflow_id = "XXX";

const prompt = {
  // ComfyUI API JSON
};
Now, let's run the workflow using the API.
const fetch = require("node-fetch");

async function runWorkflow(body) {
  const url = "https://comfy.icu/api/v1/workflows/" + body.workflow_id + "/runs";
  const resp = await fetch(url, {
    headers: {
      accept: "application/json",
      "content-type": "application/json",
      authorization: "Bearer " + process.env.COMFYICU_API_KEY,
    },
    body: JSON.stringify(body),
    method: "POST",
  });
  return await resp.json();
}

const run = await runWorkflow({ workflow_id, prompt });
console.log(run);
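Note that runWorkflow returns resp.json() regardless of the HTTP status, so an authentication failure or a malformed prompt can be mistaken for a run object. A small guard can surface HTTP errors explicitly. This is just a sketch: parseApiResponse is a hypothetical helper, not part of the ComfyICU API, and the error-message format here is our own choice.

```javascript
// Sketch: validate a fetch Response before parsing it as a run object.
// `parseApiResponse` is a hypothetical helper, not part of the ComfyICU API.
async function parseApiResponse(resp) {
  if (!resp.ok) {
    // Include the status code and raw body so failures are easy to debug
    const text = await resp.text();
    throw new Error(`ComfyICU API error ${resp.status}: ${text}`);
  }
  return resp.json();
}

// Usage inside runWorkflow: return parseApiResponse(resp);
```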
To check the status of your run, you can either use a webhook to listen for status updates or periodically poll the API.
Webhooks provide real-time updates about your runs. Specify an endpoint when you create a run, and ComfyICU will send HTTP POST requests to that URL when the run is started and completed.
// This webhook endpoint should accept POST requests
const webhook = "https://your-public-web-server.com/api/comfyicu-webhook";

const run = await runWorkflow({ workflow_id, prompt, webhook });
console.log(run);
You need to have a server that can receive POST requests. Here's a simple example using ExpressJS:
const express = require("express");
const bodyParser = require("body-parser");

const app = express();
const port = 5000;

// Middleware to parse JSON bodies
app.use(bodyParser.json());

app.post("/api/comfyicu-webhook", (req, res) => {
  const data = req.body;
  console.log(`Received webhook update: ${JSON.stringify(data, null, 2)}`);
  // Process the webhook data as needed
  res.status(200).json({ status: "success" });
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
The request body is a run object in JSON format. This object has the same structure as the object returned by the get run status API below.
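The exact schema of the run object is defined by the ComfyICU API; as a rough sketch based only on the fields this guide uses elsewhere (id, workflow_id, and status), it might look like the following. The output field and the sample values are assumptions for illustration, not a documented contract.

```javascript
// Hypothetical run object sketch; only id, workflow_id, and status are
// fields this guide actually relies on. `output` is an assumption.
const exampleRun = {
  id: "run_abc123",        // run identifier, used when polling
  workflow_id: "XXX",      // the workflow this run belongs to
  status: "COMPLETED",     // e.g. "COMPLETED" or "ERROR" when terminal
  output: [],              // assumed: files saved under /output/
};
```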
While webhooks are handy, they're not strictly necessary to use ComfyICU. You can also poll the API periodically to check the status of a run over time, as shown below:
async function getRunStatus(workflow_id, run_id) {
  const url = "https://comfy.icu/api/v1/workflows/" + workflow_id + "/runs/" + run_id;
  const resp = await fetch(url, {
    headers: {
      accept: "application/json",
      "content-type": "application/json",
      authorization: "Bearer " + process.env.COMFYICU_API_KEY,
    },
  });
  return await resp.json();
}

const status = await getRunStatus(workflow_id, run.id);
console.log(status);
When polling for status updates, you typically want to check periodically until the run is complete. Here's an example of how to do this:
async function pollRunStatus(workflow_id, run_id, maxAttempts = 30, delay = 10000) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const status = await getRunStatus(workflow_id, run_id);
      console.log(`Attempt ${attempt + 1}: Run status is ${status.status}`);
      if (status.status === "COMPLETED" || status.status === "ERROR") {
        return status;
      }
      await new Promise((resolve) => setTimeout(resolve, delay));
    } catch (error) {
      console.error(`Error during polling: ${error.message}`);
      throw error;
    }
  }
  throw new Error("Max polling attempts reached");
}

async function main() {
  try {
    const finalStatus = await pollRunStatus(workflow_id, run.id);
    console.log(`Final status: ${JSON.stringify(finalStatus, null, 2)}`);
  } catch (error) {
    if (error.message === "Max polling attempts reached") {
      console.log(`Polling timed out: ${error.message}`);
    } else {
      console.log(`An error occurred while polling: ${error.message}`);
    }
  }
}

main();
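Putting the two steps together, a small convenience wrapper can submit a run and wait for a terminal status in one call. This is a sketch, not part of the ComfyICU API; the runWorkflow and pollRunStatus functions from the snippets above are passed in as arguments so the helper can be exercised with stubs.

```javascript
// Sketch: submit a run and wait for it to reach a terminal status.
// `deps` carries the runWorkflow and pollRunStatus functions defined earlier,
// injected so this hypothetical helper is easy to test without a network.
async function runAndWait(deps, params) {
  const run = await deps.runWorkflow(params);
  if (!run || !run.id) {
    // The API did not return a run object we can poll for
    throw new Error(`Run was not created: ${JSON.stringify(run)}`);
  }
  return deps.pollRunStatus(params.workflow_id, run.id);
}

// Usage: const final = await runAndWait({ runWorkflow, pollRunStatus }, { workflow_id, prompt });
```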
You can use the files field to specify the ComfyUI destination path as the key and a direct download link for the file as the value. Then update the prompt with the same filename.
For example, if you want to load image1.jpg in a Load Image node:
const files = {
  // direct links to download the input files
  "/input/image1.jpg": "http://public-url-for-assets.com/image1.jpg",
  "/input/image2.jpg": "http://public-url-for-assets.com/image2.jpg",
  "/input/video.mp4": "http://public-url-for-assets.com/video.mp4",
};

const prompt = {
  // ComfyUI API JSON
  // ...
  45: {
    _meta: {
      title: "Load Image",
    },
    inputs: {
      image: "image1.jpg", // specify the input filename
      upload: "image",
    },
    class_type: "LoadImage",
  },
  // ...
};

const run = await runWorkflow({ workflow_id, prompt, files });
console.log(run);
The files parameter is optional. If you don't have any input files, you can simply omit it.
Please note that only files saved to the /output/ directory are considered output of a run. Temporary files produced by PreviewImage nodes will not be saved.
All files provided via files will be downloaded to their respective paths before workflow execution begins. These downloads do not count against your GPU usage, and the files are cached to speed up subsequent executions.
Just like input files, you can provide custom models via the files field. For example, if you want to use the Thickline LoRA from Civitai:
const files = {
  // ...
  // direct link to download the lora
  "/models/loras/thickline_fp16.safetensors":
    "https://civitai.com/api/download/models/16368?type=Model&format=SafeTensor&size=full&fp=fp16",
  // similarly, you can provide checkpoints, embeddings, VAEs, etc. via the right ComfyUI path
  "/models/checkpoints/custom_model.safetensors": "https://...",
  "/models/embeddings/custom_embedding.pt": "https://...",
};

const prompt = {
  // ComfyUI API JSON
  // ...
  43: {
    inputs: {
      clip: ["4", 1],
      model: ["4", 0],
      lora_name: "thickline_fp16.safetensors", // specify the lora filename
      strength_clip: 1,
      strength_model: 1,
    },
    class_type: "LoraLoader",
  },
  // ...
};

const run = await runWorkflow({ workflow_id, prompt, files });
console.log(run);
Add the accelerator option to the request body to select from T4, L4, A10, A100_40GB, A100_80GB, or H100 GPUs.
const accelerator = "A100_40GB";

const run = await runWorkflow({ workflow_id, prompt, files, accelerator });
console.log(run);
And there you have it! You're now ready to build and run your custom ComfyUI workflows using the ComfyICU API. Happy coding!