Run ComfyUI in the Cloud

Share, Run and Deploy ComfyUI workflows in the cloud.

No downloads or installs required. Zero setup.

Pay only for active GPU usage, not idle time. Zero wastage.

5k credits free. No credit card required.

Only pay for what you use

Comfy.ICU bills you only for the time your workflows are actually running. You don't pay for expensive GPUs when you're not using them.

Forget about the on/off switch for your GPUs. No more fear of sky-high bills if you forget to turn them off.

Comparison of billable seconds (GPU usage)

Other services: pay for both active and idle GPU usage

Comfy.ICU: pay only for active GPU usage

Ready-to-Use ComfyUI Workflows

Endless creative workflows, ready-to-use. No downloads, no setups.

Video-to-Video

With AnimateDiff, IPAdapter, ControlNet

Text-to-Video

With AnimateDiff, Prompt Travel

Image-to-Video

With AnimateDiff, Stable Video Diffusion (SVD)

FaceSwap

With ReActor, IPAdapter Face, FaceID v2 and InstantID

Upscaling

With Hires-fix, UltraSharp, SUPIR, CCSR and APISR.

Simple and Scalable ComfyUI API

Take your custom ComfyUI workflows to production.

Run ComfyUI workflows using our easy-to-use REST API. Focus on building next-gen AI experiences rather than maintaining your own GPU infrastructure.
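
As a rough illustration of what submitting a run can look like, the sketch below posts a workflow to a REST endpoint with Python's requests library. The base URL, path, payload fields, and auth scheme shown here are placeholders, not the documented interface; refer to the API documentation for the actual endpoints and parameters.

```python
import requests

# All of the following are placeholders for illustration only; the real
# endpoints, field names, and auth scheme are defined in the Comfy.ICU
# API documentation.
API_BASE = "https://comfy.icu/api/v1"   # assumed base URL
API_TOKEN = "YOUR_API_TOKEN"            # your account's API token
WORKFLOW_ID = "your-workflow-id"        # the workflow you want to run


def run_workflow(prompt: dict) -> dict:
    """Submit a ComfyUI workflow graph (exported as API-format JSON) for execution."""
    response = requests.post(
        f"{API_BASE}/workflows/{WORKFLOW_ID}/runs",   # hypothetical path
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"prompt": prompt},  # the exported ComfyUI workflow JSON
        timeout=60,
    )
    response.raise_for_status()
    return response.json()  # typically a run ID plus its current status
```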

Fast and lightweight

No containers. All you need for deployment is your JSON workflow and models.

Forget about infrastructure

Deploying ComfyUI workflows can be hard; Comfy.ICU handles the infrastructure so you don't have to.

Scalable as you grow

If you get a ton of traffic, Comfy.ICU scales up automatically.

Over 5,000 happy users!

Real testimonials from some of our users

"I burned through $50 in runpod credits messing with ComfyUI before and it just struck me as so inefficient that I was reserving these massive GPUs for hours at a time when I only needed them for like 1% of that, so thrilled to see this kind of thing gain some traction"

- werdnum

"This platform is really helping by the way, I have spent 2 weeks trying to get a serverless implementation of this workflow running! So thanks!"

- MrJames

Frequently Asked Questions

What is ComfyUI?

ComfyUI is a node-based GUI for Stable Diffusion, allowing users to construct image generation workflows by connecting different blocks (nodes) together. It simplifies the creation of custom workflows by breaking them down into rearrangeable elements, such as loading a checkpoint model, entering prompts, and specifying samplers. ComfyUI is known for its flexibility, lightweight nature, and transparency in data flow, making it ideal for prototyping and sharing reproducible workflows.

It offers features like ComfyUI Manager for managing custom nodes, Impact Pack for additional nodes, and various functionalities like text-to-image, image-to-image workflows, and SDXL workflow. ComfyUI is a powerful tool for designing and executing advanced stable diffusion pipelines with a flowchart-based interface, supporting SD1.5, SD2, SDXL, and various models like Stable Video Diffusion, AnimateDiff, ControlNet, IPAdapters and more. It is a versatile tool that can run locally on computers or on GPUs in the cloud, providing users with the ability to experiment and create complex workflows without the need for coding.

What is the difference between ComfyICU and ComfyUI?

ComfyICU is a cloud-based platform designed to run ComfyUI workflows. It aims to enhance the user experience by providing a user-friendly and cost-efficient environment.

With ComfyICU, you are billed based on the actual runtime of your workflow. No more worrying about extravagant costs for GPUs you're not using, or fear of sky-high bills if you forget to turn them off. We've got you covered with our top-notch GPUs, delivering fast results without breaking the bank. We've also taken the stress out of setting up the ComfyUI environment: our platform comes pre-loaded with popular models and nodes, so you can dive right into your projects without the usual setup hassles and dependency nightmares.

Plus, ComfyICU offers ready-to-use ComfyUI creative workflows. No need for downloads or setups - we're all about making things easy for you. And the best part? Every run of your workflow is automatically saved and version controlled. No more searching for that one perfect workflow - you've got a history of all your successful runs right at your fingertips.

With ComfyICU, running ComfyUI workflows is fast, convenient, and cost-effective. Welcome aboard!

How is ComfyUI different from Automatic1111 WebUI?

ComfyUI and Automatic1111 are both user interfaces for creating artwork based on stable diffusion, but they differ in several key aspects:

  1. Flexibility and Control: ComfyUI offers more flexibility in image generation, allowing users to create complex and intricate images with greater control, while Automatic1111 focuses more on ease of use and simplicity.

  2. Ease of Use: Automatic1111 is designed to be user-friendly with a simple interface and extensive documentation, while ComfyUI has a steeper learning curve, requiring more technical knowledge and experience with machine learning.

  3. Performance: ComfyUI offers faster performance than Automatic1111, handling VRAM and RAM more efficiently and avoiding issues like "CUDA out of memory" errors, which are common in Automatic1111.

How can I install the ComfyUI Manager?

ComfyUI Manager is used behind the scenes to manage extensions. However, user installations of it are not supported due to compatibility issues and the unique challenges it presents. If you require custom nodes, please request that they be added.

Can I install custom node packs in the workflows I import?

We don't support user installations of custom nodes due to the unique challenges they present and the pure serverless design of ComfyICU. However, our team can manually install them on your behalf.

Are the workflows I upload to my Comfy.ICU account public?

Workflows uploaded to your Comfy.ICU account are public by default and can be accessed via the workflows tab at https://comfy.icu/workflows. However, they only appear on the "Explore" tab if they are trending.

Can I have private workflows?

Yes, we offer the option of private workflows, but this feature is only available for paid users.

How can I add custom models?

Currently, we do not support the addition of custom models via the web UI. If you need a custom model, please raise a request in the #request-custom-model channel. We can manually add them to the website UI if they are available on Civitai.

Where can I find available public models?

A list of built-in public models can be found here.

How do custom workflow executions work with serverless?

ComfyICU caches downloaded models temporarily on disk. Subsequent requests with the same model URL will automatically use this cache. As for custom nodes, we do not perform any custom installation during inference. Only selected custom extensions are supported out of the box to ensure stability.

How do you handle custom workflows that require specific git branches, or specific package version configurations?

We manually test extensions before adding them to ComfyICU, then freeze that version of the extension and their dependencies. This process is still being refined as it has proven to be hard to do at scale, across many custom nodes. We currently only maintain one version of custom nodes at a time.

Can I use ComfyICU as a backend for my web app?

Yes, if you're looking to use ComfyICU as a ComfyUI backend API, please refer to our API documentation.
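
As a hedged sketch only, a web backend would typically submit a run and then poll its status until it finishes. The endpoint path and the status values below are assumptions, not the documented API; the API documentation is the authoritative reference.

```python
import time

import requests

API_BASE = "https://comfy.icu/api/v1"   # placeholder base URL
API_TOKEN = "YOUR_API_TOKEN"            # placeholder token
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}


def wait_for_run(workflow_id: str, run_id: str, poll_seconds: float = 2.0) -> dict:
    """Poll a hypothetical run-status endpoint until the run finishes."""
    while True:
        resp = requests.get(
            f"{API_BASE}/workflows/{workflow_id}/runs/{run_id}",  # assumed path
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        run = resp.json()
        # "COMPLETED" / "ERROR" are assumed terminal states, not documented ones.
        if run.get("status") in ("COMPLETED", "ERROR"):
            return run
        time.sleep(poll_seconds)  # queue time can include a GPU cold start
```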

Why does the process sometimes stay in queue for a long time?

Sometimes a run may stay in the queue for longer due to resource allocation and GPU provisioning. All our instances are elastic and scale to zero when there is no traffic, so the first run after an idle period includes a cold start.

Can I have a minimum of 1 worker to be highly available and ready to run?

If the cold start is a crucial factor for you, we’re open to building custom implementations for your specific use case.

Can I parallelize requests?

The Pro plan offers the most concurrency per account. The Standard plan also offers parallelism, but it is not fixed and depends on GPU availability. The main limit on running many requests in parallel is GPU cold starts.

Why is my workflow taking longer to execute than before?

If you notice a significant increase in runtime, keep in mind that the first run, where ComfyUI loads models into GPU memory, is the best reference point; subsequent runs are faster because the model is already in memory. Also double-check the number of steps in those workflows, as it significantly affects runtime.

Why is my workflow taking longer to execute on ComfyICU?

If you're comparing the runtime on ComfyICU with your local machine or other dedicated GPU clouds, you might notice that ComfyICU tends to take a bit longer. The run time on ComfyICU should be comparable to the first run on your local machine, not the subsequent runs.

On your local machine, the initial run often takes more time because the models are loaded from the disk into the memory. However, subsequent runs are quicker because these models are cached in the memory.

On the other hand, ComfyICU operates differently. It's designed to share a single GPU with multiple users simultaneously. Consequently, the memory must be cleared between runs, which may require the model to be loaded from the disk again, resulting in a slightly longer runtime.

This is a compromise for achieving greater cost efficiency, albeit at the expense of a minor reduction in speed. We do use smart load balancers to try to run the same workflow on the same GPU and take advantage of the cache, but this is not always guaranteed.

If latency is critical for your application, please DM us and we can provision dedicated GPUs for you. However, this means you'll also be billed for idle time when the GPU isn't actively running a workflow.

How can I install custom nodes?

While ComfyICU doesn't support user installations of custom nodes due to the unique challenges they present, our team can manually install them on your behalf. We handle all the dependencies and updates, allowing you to focus on your creations. If you have specific custom nodes in mind, please let us know.

Does the custom model have to be downloaded for each run?

Custom models are cached temporarily on disk for faster subsequent requests. However, because ComfyICU shares a single GPU with multiple users, the model might not be pre-loaded from disk to memory on every run. This is why first runs usually take longer, while subsequent runs are much faster.

Why does the workflow execution time vary?

The execution time of workflows can vary based on several factors. These include the number of steps in the workflow, whether the model is already loaded into memory, and the current load on the GPU. If you notice that a workflow is taking longer than usual to execute, it could be due to increased demand on the GPU or because the model needs to be loaded into memory.

Where can I get help if I encounter problems or have questions?

If you encounter any problems or have any questions about using ComfyICU, please don't hesitate to reach out to our support team on Discord. We're here to help you get the most out of our service and ensure that your experience with ComfyICU is as smooth and productive as possible.

What future developments are planned for ComfyICU?

We're continuously working to improve ComfyICU and expand its capabilities. This includes exploring ways to support the installation of custom nodes and models, refining our handling of specific custom node versions, and investigating solutions to reduce cold start times. We appreciate your patience and support as we work on these enhancements.

Ready to get started?

Create a free account and start creating now