With programs like Blender and Fusion now either free or super affordable, slapping some sick VFX on your next project isn’t a pipe dream anymore. Even Epic has made its new Quixel Megascans free for some creatives (and affordable to everyone else).
But one thing can still be an obstacle: the dreaded render, especially if your scene has a lot of particles, animations, and model details.
While you can optimize the scene and/or do it in passes, there is an AI tool that can help speed things up. Let's get into it.
Render Once, Double Your Frame Rate
While the title of Kaizen Tutorials' video claims you can get 300% faster renders in Blender Cycles, the idea is more AI trickery on the back end than an actual speed boost to Blender. However, the numbers still check out.
Kaizen's VFX sequence. Credit: Kaizen Tutorials
The idea behind the workflow utilizes the "Step" feature in the Frame Range setting of Blender. To put it simply, you can tell Blender to render out every other frame, instead of every frame. The controls get a lot more granular, but for this example, that’s the basic idea.
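As a rough sketch of the math (the function name here is mine for illustration, not part of Blender), a Step of 2 means Blender only renders every other frame in the range:

```python
def frames_to_render(total_frames: int, step: int) -> int:
    """Frames Blender actually renders for a given Step value.

    Blender renders frame_start, frame_start + step, and so on,
    so a Step of 2 renders roughly half the frames in the range.
    """
    return (total_frames + step - 1) // step

print(frames_to_render(240, 1))  # every frame: 240
print(frames_to_render(240, 2))  # every other frame: 120
```

Higher Step values cut the frame count further (a Step of 3 renders roughly a third of the frames), which is where the more granular control comes in.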
Frame Range is found under Output Properties
Technically, this doesn't make Blender render any faster. It cuts your render time because you're removing half the frames.
So, how do we get them back?
Well, this is where FlowFrames comes in. This AI tool uses machine learning models to interpolate new frames into your sequence.
In Kaizen's example, he output an eight-second, 30fps render with a Step of 2, which in effect created a video at 15fps. After that, he used FlowFrames to generate the missing frames, which only took about half a minute.
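The arithmetic behind that example works out cleanly. A quick sketch (variable names are mine, and the 2x interpolation factor is approximate since edge frames can shift the count by one):

```python
duration_s = 8
target_fps = 30
step = 2

total_frames = duration_s * target_fps    # 240 frames in the finished video
rendered = total_frames // step           # 120 frames Blender actually renders
stepped_fps = target_fps / step           # the stepped output plays at 15 fps
interp_factor = target_fps / stepped_fps  # FlowFrames needs roughly 2x interpolation
final_frames = rendered * interp_factor   # back to ~240 frames at 30 fps

print(rendered, stepped_fps, interp_factor)  # 120 15.0 2.0
```

Rendering half the frames roughly halves the render time, and FlowFrames fills the gap in seconds rather than the minutes (or hours) those frames would have taken to render.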
Kaizen's FlowFrames settings. Credit: Kaizen Tutorials
So yes, you can triple your Blender render speeds using AI. However, it happens in FlowFrames and not actually in Blender, but we're just splitting hairs at this point.
A Few Questions
Interpolation isn't a new technique. It happens when you speed ramp footage or resize it to a different resolution, but in this instance, it's replacing the work your renderer is doing.
But let's ask some questions. Are these new frames what you created in your render, or are they just an interpretation? Well, both. FlowFrames uses the frames before and after the missing frame to create an entirely new one that stitches the two together.
What would happen if the scene has a lot of movement or objects? We all know how wonky AI-generated images can get. Thankfully, we can see the results in Kaizen's examples. The AI-assisted sequence is practically identical. I'm sure whatever differences I'm seeing are just my eyes playing tricks.
But what do you see?
Using this technique probably won't be the right fit for every project. VFX shots are usually rendered out as image sequences, which is the standard workflow. However, according to Kaizen's FlowFrames tutorial, you'd need a video file as the input.
Because of that, this won't be a perfect plug-and-play solution, and you'll have to run tests to see what works best for you.
Also, if you render out your scene in passes, I'm not sure if FlowFrames would be able to recognize your alpha channel or transparency, so I'd recommend testing that as well.
Having said that, FlowFrames does cut your overall render times by a crazy amount. Whether you're outputting sequences to build motion graphics or you need a quick background plate, this is a novel solution.
But we want to hear from all our VFX readers! What do you think about using FlowFrames in your workflow? Do you see any pitfalls? Are there better solutions? Let us know in the comments!
Source: Kaizen Tutorials