NeRF, or Neural Radiance Fields, will change VFX forever—without using a single foam dart.

As a general rule, I've been pretty outspoken about my skepticism and relative cynicism about most of the emerging artificial intelligence trends as they relate to art and creativity. Without going too far into specifics, I just think we're going too deep without considering enough of the tangible consequences. Still, with no end in sight, the train has left the station: AI is here to stay, and it's learning.


It's getting better, and it can do more and more every single day.

Our robot overlords are evolving. Credit: Rock'n Roll Monkey

Imagine My Horror (and Fascination)

As cynical as I may be, I've still engaged with AI tools, played with them, and looked on with mixed horror and fascination at what they're capable of. At one point, I even created a fake AI-generated person on Instagram (a mix of Timothée Chalamet and Paul Walker) that was going to post exclusively AI-generated landscape photos made to look like they were shot on film to see if anyone noticed they weren't real.

However, I abandoned that terribly gimmicky idea pretty quickly when I realized AI was moving so fast that the novelty would be gone in a matter of weeks. Midjourney can now do this in seconds.

The speed at which these tools are being developed is mind-bending, and there is no telling what the creative landscape will be like in a matter of years or even months at this point. However, I'm not all doom and gloom about AI. There are some bright spots in the mix (Topaz AI tools have become a daily part of my life).

That leads me to what I think is actually the coolest AI thing to come about so far: NeRF, or Neural Radiance Fields.

Not the NeRF of Your Youth

This short clip below from NVIDIA landed in my YouTube recommendations around eight months ago when it was posted, and it really melted my brain for a moment.

I watched it several times and read the word "Instant NeRF" and thought, "Welp, better keep an eye on that!" Here we are less than a year down the road, and it already looks like it will change VFX and how we do them forever.

Here's why I think NeRF is game-changing, industry-altering, and most of all—super exciting.

What Is NeRF?

NeRF stands for Neural Radiance Fields, which is a fancy way of saying a neural network that generates new views of a scene based on limited input imagery. You feed it a handful of images, and it builds a three-dimensional representation from that input and fills in whatever gaps there may be.
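
For the more technically curious, here's a rough sketch of the core idea in PyTorch: a small network that takes a 3D position and a viewing direction and returns a color and a density. The layer sizes and names are made up for illustration; this is the concept, not NVIDIA's actual code.

```python
# A minimal sketch of the idea behind a radiance field, not a real implementation.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Real NeRFs positional-encode the inputs first; omitted here for brevity.
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # how "solid" space is at this point
        self.color_head = nn.Sequential(           # color also depends on view direction
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, position, direction):
        features = self.trunk(position)
        density = torch.relu(self.density_head(features))               # non-negative density
        rgb = self.color_head(torch.cat([features, direction], dim=-1))
        return rgb, density

# Query one point in space as seen from one direction.
model = TinyNeRF()
rgb, sigma = model(torch.rand(1, 3), torch.rand(1, 3))
```

Training boils down to rendering rays through this field and comparing the result to the pixels of your input photos; the gap filling falls out of the network having to stay consistent across every view it was shown.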

The above video by Wren Weichman over at Corridor Digital demonstrates the level of excitement this new technology warrants. Especially in the realm of VFX, it represents a new era of what's possible and offers a glimpse of how image-creation pipelines might work in the future.

For Star Trek fans, we're getting closer and closer to the holodeck, at least from an image generation standpoint, since display tech still has a way to go.

In the same way that an AI can take all of the imagery available on the internet and generate a picture based on your input, that same AI can take your input imagery and decide what should go in each gap that isn't covered. With enough input imagery, it can also make assumptions about reflectivity, brightness, texture, and so on.
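
That gap filling happens at render time: for each pixel, you march a ray through the scene, query the field at sample points along it, and composite the results front to back. Here's a hedged sketch of that step, with a random stand-in where a trained network would go.

```python
# Sketch of the volume-rendering step that turns field queries into one pixel.
import torch

def query_field(points, direction):
    # Placeholder for a trained model: per-sample color and density.
    n = points.shape[0]
    return torch.rand(n, 3), torch.rand(n, 1)

def render_ray(origin, direction, near=0.1, far=4.0, n_samples=64):
    # Sample points along the ray between the near and far planes.
    t = torch.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = query_field(points, direction)

    # Emission-absorption compositing: opaque samples early on the ray
    # block the contribution of everything behind them.
    delta = torch.cat([t[1:] - t[:-1], torch.tensor([1e10])])   # last segment is "infinite"
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)         # per-sample opacity
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )
    weights = transmittance * alpha
    return (weights[:, None] * rgb).sum(dim=0)                  # final pixel color

pixel = render_ray(torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```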

The implications of NeRF are far-reaching. The above video shows the capability of NeRF to take limited or obstructed datasets (or images) and create unobstructed imagery. It fills in the gaps as needed, especially for locations or subjects with a large library of data online to pull from (e.g., tourist locations and sculptures that many people have photographed). This also means you can transfer style, weather, or even lighting from certain input images and apply it to your scene.

All of this results in the ability to create digitized drone scans of entire city skylines, with the potential for accurate reflections, lighting, and conditions, based on perhaps a few seconds of drone footage. Reflections showing up in your scan (or even being possible) is a particular technological feat worth noting.

These scans can generally be done from the ground, meaning you're able to get what almost looks like a drone shot from a few photos taken at street level. The AI can infer what the tops of things might look like, which is vastly different from other photogrammetry methods.

How Is NeRF Different from Photoscanning?

In the below example, you might notice a few things. First, it's all a little bit smooshed and weird looking. Second, the geometry of the car is relatively intact, and that right there is where the secret sauce is.

If you've ever done a photoscan before, you'll know a few of the limitations and the types of things that you cannot scan successfully, the biggest one being any object with any reflections whatsoever. Reflections are not scannable because traditional photogrammetry methods require points of texture contrast or detail to help the software rebuild the geometry. This makes the use of cross-polarized lenses and things of that sort necessary to get a high-fidelity scan, and even then, your results will vary greatly depending on how shiny your object is.
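
To get a feel for why, here's a tiny illustration using OpenCV's feature detector, with synthetic images standing in for real photos: the matching step photogrammetry is built on simply finds nothing to lock onto on a smooth, glossy surface.

```python
# Illustration only: a noisy image stands in for a well-textured surface, and a
# flat gray image stands in for a smooth or glossy one with no usable detail.
import cv2
import numpy as np

orb = cv2.ORB_create()

textured = (np.random.rand(480, 640) * 255).astype(np.uint8)  # lots of local contrast
glossy = np.full((480, 640), 128, dtype=np.uint8)              # featureless surface

print(len(orb.detect(textured, None)))  # plenty of keypoints to triangulate from
print(len(orb.detect(glossy, None)))    # essentially zero -> nothing to reconstruct
```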

NeRF doesn't seem to struggle with this as much. Depending on the input imagery and the way the engine has been trained, NeRF can create geometry and more accurately estimate which surfaces are reflective and which aren't. This means scanning things like cars is far more feasible than ever before.
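
The trick is that the viewing direction is part of the input, so the same point in space is allowed to look different from different angles, which is exactly how a specular highlight behaves. A toy stand-in for a trained field makes the point:

```python
import torch

# Stand-in for a trained radiance field: the returned color depends on both the
# point being queried and the direction it's viewed from.
def radiance(point, direction):
    return torch.sigmoid(point + direction)

point = torch.tensor([0.2, 0.1, 1.5])                   # one spot on a car hood, say
print(radiance(point, torch.tensor([-1.0, 0.0, 0.0])))  # viewed from the left
print(radiance(point, torch.tensor([1.0, 0.0, 0.0])))   # viewed from the right: different color
```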

The other thing is that when you create a traditional photoscan, the lighting present in the scene is baked into whatever texture data you generate for the model. So if you grab a scan in the sunlight, that sunlight will be burnt onto the model, along with its harsh shadows. With NeRF, it appears that it would eventually be possible to use enough input data to change the lighting on your object or apply different conditions altogether. We may not be far from just snagging a Google Earth NeRF of your front yard and making a little short film without leaving your computer chair.
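
Research along the lines of "NeRF in the Wild" hints at how that could work: the color branch gets an extra learned, per-image "appearance" vector, so at render time you can borrow the look of a sunny photo or an overcast one. A rough sketch of just that conditioning (all sizes made up, and nothing here is a full relighting pipeline):

```python
import torch
import torch.nn as nn

num_training_images, appearance_dim = 200, 16
appearance_codes = nn.Embedding(num_training_images, appearance_dim)

# Color now depends on position features, view direction, AND an appearance code,
# so you can render with the code learned from a sunny photo or an overcast one.
color_head = nn.Sequential(
    nn.Linear(256 + 3 + appearance_dim, 128), nn.ReLU(),
    nn.Linear(128, 3), nn.Sigmoid(),
)

features = torch.rand(1, 256)                # stand-in for the density branch output
direction = torch.rand(1, 3)
sunny = appearance_codes(torch.tensor([3]))  # code tied to one sunny training image
rgb = color_head(torch.cat([features, direction, sunny], dim=-1))
```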

How Does NeRF Change VFX?

How does NeRF change VFX? The below video shows about ten different ways that it can and will, and that's just in the early phases of the tech.

The most important thing to think about with NeRF is that now we're in the Matrix, we're plugged in, and images can be manipulated in previously unimaginable ways. We have neural engines that can create light and bend images to our will, and we're really at the ground floor of everything. The visuals currently being generated may not be Avatar 5-level quality just yet, but they can and most likely will be eventually, and that's what has most VFX artists excited.

You can scan night scenes, cloudy scenes, and shiny scenes. You can even take a few photos and create entire worlds out of them. I'd imagine we'll be buying NeRF scans of things like the Grand Canyon from a marketplace before the next year is through, and they will probably look better and more realistic than anything anyone has been able to achieve with traditional geometry.

Photoscanning changed everything. It meant that we no longer had to spend a ton of time painstakingly modeling and texturing all types of objects. We could just take a batch of photos and have a highly detailed and textured model in the span of an hour or less.

NeRF is a similar leap in what is possible over the same amount of time. We can now scan entire scenes with accurate lighting and potential reflections just using a few input images, and the AI will fill in the gaps that we weren't able to cover with the camera. This means we can move a camera anywhere in the scene, and it will approximate how that portion of the geometry might have looked.
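
Concretely, "move a camera anywhere" just means building a ray for every pixel from whatever camera pose you like and rendering each ray against the trained field, the same way as before. A sketch with made-up intrinsics and an identity pose:

```python
# Generate one ray per pixel for an arbitrary virtual camera; each ray would then
# be rendered against the trained field (see the earlier ray-rendering sketch).
import torch

def camera_rays(height, width, focal, cam_to_world):
    # Pixel grid -> ray directions in camera space -> rotated into world space.
    j, i = torch.meshgrid(torch.arange(height, dtype=torch.float32),
                          torch.arange(width, dtype=torch.float32), indexing="ij")
    dirs = torch.stack([(i - width * 0.5) / focal,
                        -(j - height * 0.5) / focal,
                        -torch.ones_like(i)], dim=-1)
    ray_dirs = dirs @ cam_to_world[:3, :3].T   # rotate into the world
    ray_origin = cam_to_world[:3, 3]           # camera position
    return ray_origin, ray_dirs

pose = torch.eye(4)  # an arbitrary "drone" position and orientation
origin, directions = camera_rays(100, 100, focal=120.0, cam_to_world=pose)
# image = torch.stack([render_ray(origin, d) for d in directions.reshape(-1, 3)])
```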

Also, again, it's worth considering that this is just the ground floor of the whole thing. There's no telling what comes next using this tech.

How to Make Your Own NeRFs

If you want to make your own NeRFs, there's quite a steep learning curve to getting started, but as you'd expect, it's getting easier every single day.

There is even a mobile app now, Luma AI, that will let you make NeRFs, and they have a web-based platform as well that you have to request access to. However, if you're more of a DIY type, you can download NVIDIA Instant NeRF and give it a try here, though I'd definitely recommend watching a tutorial or two first.
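
If you do go the Instant NeRF route, the training data boils down to your photos plus a transforms.json file describing where each photo was taken from, typically produced by a COLMAP-based helper script in the repo. Here's roughly what you end up working with; the exact fields can differ between versions, so treat this as a sketch:

```python
# Peek at the camera poses behind an Instant NeRF-style dataset. Assumes a
# transforms.json sits in the current folder; field names reflect the common
# format but may vary with the version of the tooling you use.
import json
import numpy as np

with open("transforms.json") as f:
    scene = json.load(f)

print(scene["camera_angle_x"])                  # horizontal field of view of your camera
for frame in scene["frames"]:
    pose = np.array(frame["transform_matrix"])  # 4x4 camera-to-world matrix
    print(frame["file_path"], pose[:3, 3])      # which photo, and where it was shot from
```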

Personally, I recommend just waiting for it to get a little more mainstream while the tech improves. I've messed around with Luma AI some, and so far I heavily prefer that approach to the more command-line-based versions I've seen and tried to follow a time or two.

It'll only be a matter of months before this stuff is a household name, I'm sure of it.