ComfyUI is a node-based GUI for Stable Diffusion, allowing users to construct image generation workflows by connecting different blocks (nodes) together. It simplifies the creation of custom workflows by breaking them down into rearrangeable elements, such as loading a checkpoint model, entering prompts, and specifying samplers. ComfyUI is known for its flexibility, lightweight nature, and transparency in data flow, making it ideal for prototyping and sharing reproducible workflows.
It offers features like ComfyUI Manager for managing custom nodes, the Impact Pack for additional nodes, and workflows such as text-to-image, image-to-image, and SDXL. ComfyUI is a powerful tool for designing and executing advanced Stable Diffusion pipelines through a flowchart-based interface, supporting SD1.5, SD2, SDXL, and models like Stable Video Diffusion, AnimateDiff, ControlNet, IPAdapters, and more. It is a versatile tool that can run locally or on GPUs in the cloud, letting users experiment and build complex workflows without writing any code.
ComfyICU is a cloud-based platform designed to run ComfyUI workflows. It aims to enhance the user experience by providing a user-friendly and cost-efficient environment.
With ComfyICU, you are billed based on the actual runtime of your workflow. No more worrying about paying for GPUs you're not using, or sky-high bills because you forgot to turn them off. We've got you covered with top-notch GPUs that deliver fast results without breaking the bank. We've also taken the stress out of setting up the ComfyUI environment: our platform comes pre-loaded with popular models and nodes, so you can dive right into your projects without the usual setup hassle and dependency nightmares.
Plus, ComfyICU offers ready-to-use ComfyUI creative workflows. No need for downloads or setups - we're all about making things easy for you. And the best part? Every run of your workflow is automatically saved and version controlled. No more searching for that one perfect workflow - you've got a history of all your successful runs right at your fingertips.
With ComfyICU, running ComfyUI workflows is fast, convenient, and cost-effective. Welcome aboard!
ComfyUI and Automatic1111 are both user interfaces for creating artwork with Stable Diffusion, but they differ in several key aspects:
Flexibility and Control: ComfyUI offers more flexibility in image generation, allowing users to create complex and intricate images with greater control, while Automatic1111 focuses more on ease of use and simplicity.
Ease of Use: Automatic1111 is designed to be user-friendly, with a simple interface and extensive documentation, while ComfyUI has a steeper learning curve, requiring more technical knowledge and experience with machine learning.
Performance: ComfyUI offers faster performance than Automatic1111, handling VRAM and RAM more efficiently and avoiding issues like the "CUDA out of memory" errors that are common in Automatic1111.
Workflows uploaded to your Comfy.ICU account are public by default and can be accessed via the workflows tab at https://comfy.icu/workflows. However, they only appear on the "Explore" tab if they are trending. You can change this by going to Settings and changing your workflow's visibility to private.
Yes, we offer the option of private workflows, but this feature is only available for paid users.
Yes, if you're looking to use ComfyICU as a ComfyUI backend API, please refer to our API documentation.
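For illustration, here is a rough sketch of what a server-side API call could look like in TypeScript. The endpoint path, payload shape, and response fields below are assumptions made for this example; the API documentation is the authoritative reference.

```ts
// Hypothetical sketch of submitting a workflow run from a Node.js (18+) server.
// The endpoint path and payload fields are illustrative assumptions -- check
// the ComfyICU API documentation for the actual contract.
const API_KEY = process.env.COMFYICU_API_KEY!; // keep the key server-side
const WORKFLOW_ID = "your-workflow-id";        // placeholder

async function submitRun(prompt: Record<string, unknown>) {
  const res = await fetch(`https://comfy.icu/api/v1/workflows/${WORKFLOW_ID}/runs`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ prompt }), // workflow graph in ComfyUI's API (prompt) format
  });
  if (!res.ok) throw new Error(`ComfyICU request failed: ${res.status}`);
  return res.json(); // typically includes a run id you can poll for status and outputs
}
```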
Custom nodes are additional code modules that extend the capabilities of ComfyUI. Only selected custom extensions are supported out of the box to ensure stability.
We currently do not support user installations of custom nodes due to the complexities in ComfyICU's pure serverless design. However, we can manually install these on your behalf, managing all dependencies and updates so you can focus on your work.
Please note that we prioritize custom extension requests from our paid users. However, it's important to remember that not all extensions are well-maintained, and some could pose security risks in a shared environment like ComfyICU. Also, adding rarely used extensions could affect system performance for all users.
Therefore, we aim to maintain a balance. Please submit your requests on our #model-node-request Discord channel.
ComfyICU currently does not have a built-in model upload feature, so you'll need to rely on external storage solutions like S3 or public model repositories.
The easiest approach is to store them in an S3 bucket and make the files publicly accessible. Once the LoRA is available at a public S3 URL, you can simply reference that URL directly within your workflow.
Another option is to upload the LoRA to a public model repository like Hugging Face or Civitai. However, keep in mind that for this approach to work, the model needs to be publicly available, as ComfyICU cannot access private or protected models.
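To make this concrete, these are the kinds of URLs that typically work; the bucket, repository, and file names below are hypothetical placeholders.

```ts
// Hypothetical examples of publicly accessible, direct-download LoRA URLs.

// A public S3 object:
const s3LoraUrl =
  "https://my-bucket.s3.amazonaws.com/loras/my_style_lora.safetensors";

// A Hugging Face file served through the resolve endpoint (direct download),
// as opposed to a /blob/ URL, which returns an HTML page:
const hfLoraUrl =
  "https://huggingface.co/<user>/<repo>/resolve/main/my_style_lora.safetensors";
```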
It's important to note that some common file-sharing methods, such as Google Drive links, require additional authentication and are therefore not compatible with ComfyICU for uploading LoRAs.
Due to the unique serverless environment of ComfyICU, we first need to test these extensions in sandboxes before deploying them to all our servers. This process may take some time, and we are unable to provide a specific timeline at present.
ComfyUI Manager is a custom extension that allows users to easily install other custom extensions. Unfortunately, it is not available on ComfyICU, as we do not support user installations due to compatibility issues and the complexity they introduce in ComfyICU's pure serverless design.
If you need custom nodes, please submit a request on our #model-node-request Discord channel.
We manually test extensions before adding them to ComfyICU, then freeze that version of the extension and its dependencies. This process is still being refined, as it has proven hard to do at scale across many custom nodes. We currently maintain only one version of each custom node at a time.
You can add custom models by clicking the "Models" tab in the workflow editor. Simply add the direct download link to the model file, and ComfyICU will download the model before executing the workflow.
When adding model links from CivitAI or Hugging Face, please ensure that the link leads directly to the file download rather than to a webpage.
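If you're not sure whether a link is a direct download, one quick sanity check is to inspect the response headers before pasting it in. This is only a heuristic sketch (Node 18+ with the global fetch), not part of ComfyICU itself.

```ts
// Rough heuristic: does this URL serve a file rather than an HTML page?
// Some hosts answer HEAD requests differently from GET, so treat this as a hint.
async function looksLikeDirectDownload(url: string): Promise<boolean> {
  const res = await fetch(url, { method: "HEAD", redirect: "follow" });
  const contentType = res.headers.get("content-type") ?? "";
  const disposition = res.headers.get("content-disposition") ?? "";
  return res.ok && (disposition.includes("attachment") || !contentType.includes("text/html"));
}
```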
We temporarily cache the downloaded models on disk, so if you make subsequent requests with the same model URL, the system will automatically use the cached version.
Custom models are cached temporarily on disk for faster subsequent requests. However, because ComfyICU shares a single GPU with multiple users, the model might not be pre-loaded from disk to memory on every run. This is why first runs usually take longer, while subsequent runs are much faster.
ComfyICU supports hundreds of models out of the box. A list of built-in models can be found here.
If you're comparing the runtime on ComfyICU with your local machine or other dedicated GPU clouds, you might notice that ComfyICU tends to take a bit longer. The run time on ComfyICU should be comparable to the first run on your local machine, not the subsequent runs.
On your local machine, the initial run often takes more time because the models are loaded from the disk into the memory. However, subsequent runs are quicker because these models are cached in the memory.
On the other hand, ComfyICU operates differently. It's designed to share a single GPU with multiple users simultaneously. Consequently, the memory must be cleared between runs, which may require the model to be loaded from the disk again, resulting in a slightly longer runtime.
This is a compromise for achieving greater cost efficiency, albeit at the expense of a minor reduction in speed. We do use smart load balancers to try to run the same workflow on the same GPU and take advantage of the cache, but this is not always guaranteed.
If latency is critical for your application, please contact us; we can provision dedicated GPUs for your use case. However, this means you'll also be billed for idle time when the GPU isn't actively running a workflow.
The execution time of the same workflow can vary based on several factors. These include whether the models need to be loaded from disk into GPU memory, current GPU availability and queue length, and the number of sampling steps in the workflow.
Please note that timing the green progress bars in the ComfyUI interface is not an accurate measure of performance: the updates are not real-time, and there is significant latency between the UI refresh and the actual execution. You can even close the tab and come back half an hour later, and it will still appear as if the update is happening right now. We record the execution progress asynchronously and re-stream it once the websocket connection is available again.
Sometimes, a run may stay in queue for a longer time due to resource allocation and GPU provisioning. All our instances are elastic and will scale to zero instances when there's no traffic.
Hence, the average queue time can vary depending on the traffic at the moment. However, please be assured that we strive to allocate GPUs promptly to all runs without deliberately prolonging wait times based on the plan.
However, it's worth noting that Standard plans are given a higher priority, while the Pro plan receives the utmost priority, resulting in significantly reduced queue times. As soon as GPUs become available, they are assigned to the next job in the queue. As the queue gets longer, we provision additional GPUs to meet the demand. It may take a few minutes for this to take effect as provisioning new GPUs and starting ComfyUI typically requires about 2 minutes.
If the cold start is a crucial factor for you, we’re open to building custom implementations for your specific use case.
The Pro plan offers the most concurrency per account. The Standard plan does offer parallelism, but it's not fixed and depends on GPU availability. The limit on running many requests in parallel comes from GPU cold starts.
If you notice a significant increase in runtime, the best reference point is the first run, where ComfyUI loads the models into GPU memory. Subsequent runs are faster because the model is already in memory. Also, you might want to double-check the number of steps in those workflows, as it significantly affects the runtime.
The reason you're experiencing this is due to ComfyUI's caching mechanism. Essentially, when you submit the same workflow for execution, ComfyUI recognizes it as a duplicate and returns an empty output since the exact workflow has already been processed before. This is why you may notice the runtime is just 2 seconds for such cases.
To fix this, you will need to change at least the seed number or the prompt to generate new outputs. This behavior comes from ComfyUI's underlying caching implementation, not from ComfyICU. It may sometimes still produce output anyway, because the same request can be routed to a different GPU within our cloud that hasn't cached that workflow.
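If you want every submission to produce a fresh result, one option is to randomize the seed before sending the workflow. A minimal sketch, assuming your workflow is in ComfyUI's API ("prompt") format and its sampler nodes expose a seed input; node ids and field names vary per workflow.

```ts
// Randomize every "seed" input in a ComfyUI API-format prompt so repeated
// submissions are not treated as exact duplicates by ComfyUI's cache.
type PromptNode = { class_type: string; inputs: Record<string, unknown> };

function randomizeSeeds(prompt: Record<string, PromptNode>): Record<string, PromptNode> {
  const copy = structuredClone(prompt); // don't mutate the caller's object
  for (const node of Object.values(copy)) {
    if ("seed" in node.inputs) {
      node.inputs.seed = Math.floor(Math.random() * 2 ** 32); // new random seed each run
    }
  }
  return copy;
}
```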
It seems like you might be running into a CORS error because the ComfyICU API is meant to be used on the server side. Making these requests from your client's browser could also expose your ComfyICU API key. If you're working with Next.js, you can move the ComfyICU API call into an /api/something.ts file and then call this internal endpoint from your client-side JavaScript.
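For example, with the Next.js pages router, a server-side route along these lines keeps the key out of the browser. The ComfyICU endpoint and payload shape are the same assumptions as in the earlier sketch; file and variable names are placeholders.

```ts
// pages/api/run.ts -- hypothetical Next.js API route that proxies the ComfyICU
// call server-side so the API key never reaches the browser.
import type { NextApiRequest, NextApiResponse } from "next";

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== "POST") return res.status(405).end();

  // Endpoint path and payload shape are illustrative; see the API documentation.
  const upstream = await fetch(
    `https://comfy.icu/api/v1/workflows/${process.env.COMFYICU_WORKFLOW_ID}/runs`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.COMFYICU_API_KEY}`, // server-side env var
      },
      body: JSON.stringify(req.body),
    }
  );

  res.status(upstream.status).json(await upstream.json());
}
```

Your client-side code then calls fetch("/api/run", { method: "POST", ... }), which is a same-origin request and avoids the CORS issue entirely.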
When you purchase a subscription, you are buying a time slice to utilize powerful GPUs such as T4, L4, A10, A100 and H100 for running ComfyUI workflows. Each subscription plan provides a different amount of GPU time per month.
To simplify cost calculations, each credit is valued at $0.0001, or 1/10,000th of a dollar.
The cost per second for each GPU is listed below:
In other words, the L4 GPU costs 9 credits per second, which is equivalent to $0.0009 per second.
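As a quick worked example using the L4 rate above (the 5-minute runtime is purely illustrative):

```ts
// Cost of a 5-minute (300 s) run on an L4 at 9 credits per second.
const creditsPerSecond = 9;
const creditValueUsd = 0.0001; // each credit is worth $0.0001
const runtimeSeconds = 300;

const credits = creditsPerSecond * runtimeSeconds; // 2,700 credits
const costUsd = credits * creditValueUsd;          // $0.27
console.log(`${credits} credits = $${costUsd.toFixed(2)}`);
```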
Any unused monthly GPU time or credits do not roll over to the next billing cycle.
This policy serves multiple purposes: it allows us to plan capacity effectively, ensures fair resource allocation among all subscribers, and offers flexibility in subscription timing. You can start your subscription at any point in the month without waiting for a specific date.
To maximize value, we recommend using your allocated credits within each monthly billing cycle. You can always check your current credit balance and reset date on your account page.
Your credits reset exactly one month from your subscription date, not on a fixed calendar date. For instance, if you subscribed on the 15th of a month, your credits will reset on the 15th of each subsequent month. Consequently, you can subscribe at any time without having to wait for the start of a new month.
Yes, you have full control over your subscription and can cancel it at any time. Here's how:
This redirects you to our Stripe billing portal where you can:
Note: After cancellation, service continues until the end of the current billing period. No refunds for partial months. You can reactivate anytime through your account.
We appreciate your understanding regarding our refund policy. Due to the high costs associated with provisioning GPU resources, we are unable to offer refunds on subscriptions. Our pricing model is based on committed GPU allocations to ensure consistent availability of resources for all our users.
When you sign up for our service, you agree to these terms, including waiving the right to refunds. However, we want to provide you with flexibility in managing your subscription:
We encourage you to take advantage of our free tier to evaluate our service before committing to a paid subscription. If you have any questions or concerns about your subscription, please don't hesitate to contact our support team. We're here to ensure you have the best possible experience with ComfyICU.
We totally understand where you're coming from. As a small, bootstrapped startup, it's not feasible for us to offer simple prepaid accounts or pay-as-you-go plans because of the cost of running idle GPUs. It takes a good 6-7 minutes to provision new GPUs and start ComfyUI, which is a bit too long. To do this in a reasonable amount of time, we need to keep a few GPUs always running in the background, which is quite capital intensive. Even if you, as a user, run a workflow for just 5 minutes, we still bear the cost of a full hour. The subscription model helps cover these expenses and lets us plan capacity to ensure a decent experience for everyone. Prepaid accounts or pay-as-you-go plans wouldn't quite solve these problems.
While we strive to maintain the stability of custom nodes on ComfyICU, there are instances when custom node developers introduce breaking changes that are beyond our control. This could be causing the errors in your workflow. In such cases, we recommend recreating the workflow with the updated nodes and reviewing the documentation of the custom nodes for any changes that might affect your workflow.
If you encounter any problems or have any questions about using ComfyICU, please don't hesitate to reach out to our support team on Discord. We're here to help you get the most out of our service and ensure that your experience with ComfyICU is as smooth and productive as possible.
ComfyICU provides a serverless GPU infrastructure for running ComfyUI workflows. We do not directly use specific nodes or models; our users do. Therefore, it is the responsibility of our users to comply with the licenses of the software they use on our platform. For further inquiries, please contact [email protected].
We're continuously working to improve ComfyICU and expand its capabilities. This includes exploring ways to support the installation of custom nodes and models, refining our handling of specific custom node versions, and investigating solutions to reduce cold start times. We appreciate your patience and support as we work on these enhancements.