The fastest way to deploy a ComfyUI workflow

Turn ComfyUI workflows into production-ready serverless APIs in minutes. No need to fiddle with Docker images or network volumes; we take care of that for you. Works with any node and model.

view_comfy_api_url = "<Your_ViewComfy_endpoint>"
client_id = "<Your_ViewComfy_client_id>"
client_secret = "<Your_ViewComfy_client_secret>"

# Set parameters
params = {}
params["6-inputs-text"] = "A cat sorcerer"
params["52-inputs-image"] = open("inputs/img.png", "rb")

# Call the API and get the logs of the execution in real time
prompt_result = await infer_with_logs(
    api_url=view_comfy_api_url,
    params=params,
    logging_callback=logging_callback,
    client_id=client_id,
    client_secret=client_secret,
)

Ultimately, it is as easy as dropping a workflow and picking the hardware you want to run it on.

Start from an existing workflow, or build your own

You can use an existing workflow or start from a fresh installation of ComfyUI. We take care of installing all the dependencies you will need.

Choose your hardware

Pick from our range of GPUs to deploy your ComfyUI API. From 16GB T4s to 180GB GPUs like the B200, we have what you need.

Serverless infrastructure out of the box

We've already done all the infrastructure optimisation so that you don't have to. More GPUs will automatically be added to your deployment as the load increases.

Update your API on the go

Update the workflow behind the API as you need. You can choose which version of the workflow you want to run when making an API call.
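As a rough sketch of what pinning a version per call could look like on the client side (the `workflow_version` field name here is an illustrative assumption, not ViewComfy's documented schema):

```python
def build_call(params, workflow_version=None):
    """Bundle workflow parameters with an optional version pin.

    "workflow_version" is an assumed field name for illustration;
    check your deployment's API reference for the real one.
    """
    call = {"params": dict(params)}
    if workflow_version is not None:
        call["workflow_version"] = workflow_version
    return call


# Keep serving an earlier version while a newer one is being tested
call = build_call({"6-inputs-text": "A cat sorcerer"}, workflow_version="v2")
```

Omitting `workflow_version` would simply run whichever version the deployment defaults to.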

Select the parameters you want to expose

Keep it simple and only expose the parameters you need for your API.

Add new models when you need them

Add models from Hugging Face, CivitAI or your own storage solution using a download link.

Keep it simple with our simplified API

The ViewComfy API offers many simplifications over the standard ComfyUI API. Send the parameters and the workflow version in one simple request, and the API will return the workflow's output.
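A minimal sketch of that single-request flow, splitting text parameters from file uploads into one form payload. The field names ("params", "workflow_version") are illustrative assumptions, not the documented API schema:

```python
import json


def prepare_request(params, workflow_version):
    """Split parameters into JSON-encodable text fields and file handles.

    Anything with a .read() method is treated as a file upload; the rest
    is serialized as JSON. Field names here are assumptions for
    illustration only.
    """
    text_params = {k: v for k, v in params.items() if not hasattr(v, "read")}
    file_params = {k: v for k, v in params.items() if hasattr(v, "read")}
    return {
        "data": {
            "params": json.dumps(text_params),
            "workflow_version": workflow_version,
        },
        "files": file_params,
    }


request = prepare_request({"6-inputs-text": "A cat sorcerer"}, "v1")
```

The returned dict maps directly onto a single HTTP multipart POST, so one request carries everything the workflow run needs.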