Making extra long AI videos with Hunyuan Image to Video and RIFLEx

It feels like 2025 is finally going to be the year when AI enters the world of video. We already made a post a few weeks ago about two promising models (Wan 2.1 and SkyReels), but Tencent decided to treat us to another best-in-class image-to-video model. In this guide, we will go over how to use the native ComfyUI setup for Hunyuan Image2Video with RIFLEx. This addition allows us to generate videos that are longer than the usual 129 frames (5 seconds) the model was trained on.

Results comparison with and without RIFLEx

Before we get into the details of the workflow, these are the results we got when generating a 193-frame video, with and without RIFLEx. (720p version)

As you can see, RIFLEx helps prevent the video from looping back on itself when going past the recommended 129-frame limit. It also makes the model less likely to generate artifacts in longer videos.


Quick start

If you are looking for a quick start with no installation, you can deploy the template we used in this tutorial here. We recommend using an H100 for optimal performance, as this is a heavy workflow. After two or three minutes, you should be able to access the workflow via the ComfyUI interface, or via API.
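If you would rather skip the interface and drive the deployment programmatically, here is a minimal sketch of queueing a run through ComfyUI's standard HTTP API. It assumes the instance is reachable at an illustrative YOUR_ENDPOINT address and that you have exported the workflow in API format (via the Export (API) option in ComfyUI); the file name is an example.

```python
import json
import requests  # pip install requests

# Illustrative endpoint; replace with your deployment's address.
COMFY_URL = "http://YOUR_ENDPOINT:8188"

# Workflow exported from ComfyUI in API format (example file name).
with open("hunyuan_i2v_riflex_api.json") as f:
    workflow = json.load(f)

# Queue the run; ComfyUI responds with a prompt_id you can poll for outputs.
resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
print(resp.json())  # e.g. {"prompt_id": "..."}
```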

ComfyUI cloud interface

To open the ComfyUI interface, click on the ComfyUI link. Please note that the initial loading time may take a few minutes. Once the interface is ready, you can drop this workflow into it to get started.


Key workflow parameters

In this section, we will go over the important parameters to get the most out of this workflow.

Load Image

This is the image the workflow will use to generate the video’s first frame.

Load Image ComfyUI node
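If you are driving the workflow through the API instead of the interface, the start image first needs to be in ComfyUI's input folder. A minimal sketch using the standard /upload/image endpoint (endpoint address and file name are illustrative):

```python
import requests  # pip install requests

COMFY_URL = "http://YOUR_ENDPOINT:8188"  # illustrative endpoint

# Upload a local image into ComfyUI's input folder so the Load Image
# node can reference it by name.
with open("first_frame.png", "rb") as f:  # illustrative file name
    resp = requests.post(f"{COMFY_URL}/upload/image", files={"image": f})
print(resp.json())  # contains the stored file name to set on the node
```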

Text Encode Hunyuan Video 

This is your usual text prompt. Here, we used the Flat Color LoRA from Civitai and a prompt tailored to work with that LoRA. In general, you can stick to a short description of what you are targeting, followed by a few aesthetic tags.
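For illustration only (this is not the exact prompt from our run), a prompt in that style could look like: “flat color, anime style, a woman walking through a rainy city at night, soft lighting, clean lineart”.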

Text Encode Hunyuan Video ComfyUI node

Hunyuan Image To Video

This node allows you to set the width and height of the video the workflow will generate. Length is the number of frames: with RIFLEx, you can go up to 257 frames, while without it, it is usually recommended to stay below 129.

Hunyuan Image to video ComfyUI node
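In the API-format export, these settings are plain inputs on the node. A minimal sketch, assuming the node id is "42" in your JSON (ids vary per workflow, so check your own export):

```python
import json

# Workflow exported in API format (example file name).
with open("hunyuan_i2v_riflex_api.json") as f:
    workflow = json.load(f)

# Node id "42" is an assumption; look up the real id in your export.
workflow["42"]["inputs"]["width"] = 720
workflow["42"]["inputs"]["height"] = 480
workflow["42"]["inputs"]["length"] = 193  # frames; up to 257 with RIFLEx
```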

Lora Loader Model Only

We’ve added the option to load LoRAs into the workflow. You can select LoRAs from the ComfyUI loras folder here (for people using the template, here is a guide on how to add new LoRAs). Strength controls how much the LoRA impacts the generation: at 0 the LoRA is ignored, and at 1 it is at its maximum strength.

Lora Loader Model Only ComfyUI Node
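Over the API, the same two settings are inputs on the node. A sketch, assuming a node id of "10" and the standard Lora Loader Model Only input names (verify both against your export):

```python
import json

with open("hunyuan_i2v_riflex_api.json") as f:  # example file name
    workflow = json.load(f)

# Node id "10" is an assumption; check your own export.
workflow["10"]["inputs"]["lora_name"] = "flat_color.safetensors"  # file in models/loras
workflow["10"]["inputs"]["strength_model"] = 0.8  # 0 = ignored, 1 = full strength
```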

Random Noise

Without going into the technical details, the seed is the random part of GenAI. Set it to randomize if you want the workflow to produce a completely new output each time. You can also fix it if you find a seed that works well with your other parameters; this will make the results more consistent.

Keep in mind that the seed has a huge impact on the output quality of the current generation of video models. If you are not happy with the results, changing the seed might solve the problem.

Random Noise ComfyUI node
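Note that the randomize toggle lives in the interface; when queuing runs over the API, you pick the seed yourself on each request. A sketch, assuming the Random Noise node has id "25" (the standard node's input is noise_seed):

```python
import json
import random

with open("hunyuan_i2v_riflex_api.json") as f:  # example file name
    workflow = json.load(f)

# Node id "25" is an assumption; check your own export.
workflow["25"]["inputs"]["noise_seed"] = random.randint(0, 2**63 - 1)
```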

Video Combine

This node combines all the generated frames into a video. The key parameters are the frame rate and the video format. Sometimes, Hunyuan generates very slow or very fast videos; adjusting the frame rate is a good way to change the playback speed without having to regenerate from scratch.

Video Combine ComfyUI node
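This works because ComfyUI caches node outputs between runs: if you re-queue the same workflow with only the frame rate changed (and the seed fixed), only the Video Combine step should re-execute. A sketch, assuming node id "30" and the frame_rate input name (check your export):

```python
import json
import requests  # pip install requests

COMFY_URL = "http://YOUR_ENDPOINT:8188"  # illustrative endpoint

with open("hunyuan_i2v_riflex_api.json") as f:  # example file name
    workflow = json.load(f)

# Node id "30" is an assumption; check your own export.
# With the seed and upstream inputs unchanged, ComfyUI's caching
# means only this node re-runs when the prompt is queued again.
workflow["30"]["inputs"]["frame_rate"] = 20  # lower = slower playback
requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
```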

And that is all you need to get great results with Hunyuan Video. If you want to get updates on new templates and guides, you can follow us on Twitter/X or join our Discord.

You can also upload your own workflow to ViewComfy cloud and access it via the ComfyUI interface or an API.
