May 24, 2017

How Two Companies Are Drastically Altering the Future of Visual Effects

Green screens and rotoscoping are soon to be a thing of the past, but what’s the catch?

Last year at the NAB show, Lytro introduced its groundbreaking camera, Lytro Cinema. This year, software developer Foundry (the company behind VFX compositing application Nuke) introduced Elara, a revolutionary platform for VFX production on the cloud. One of Foundry’s main collaborators in the development of Elara was Lytro. This collaboration between two innovative forces may drastically impact the future of visual effects and the VFX industry as a whole.

The end of roto and green screens?

Much has been said in the past year about Lytro's potential to make rotoscoping and green screens completely obsolete. This is more than just a novelty; it signifies a long-overdue change in the way visual effects are done.

I’ll say it loud and clear: rotoscoping and green screens are the lowest-tech areas of VFX. It seems almost surreal that while technology lets us realistically simulate the infinitely complex behavior of water, or accurately calculate the way millions of sand particles interact with their environment, roto artists are still painstakingly tracing the contours of a subject, manually moving hundreds of points frame by frame. And green screens? Show me one filmmaker who will not be happy to get rid of those bulky, spill-casting, light-reflecting objects once and for all. It's about time to move on.

Lytro’s key strength is its ability to capture accurate per-pixel depth information. With depth-based separation, you can easily keep or discard areas of the frame according to their distance from the camera. This brings us closer than ever to the VFX promised land: a land without unnatural green screens or tedious rotoscoping. But can Lightfield technology truly overcome the challenges of extraction and provide an equal (if not better) alternative to the current methods?
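
Before digging into those challenges, here is what depth-based separation looks like in its simplest form. The following is a minimal Python/NumPy sketch of my own, not Lytro's actual tooling; the depth_matte helper and its range values are purely hypothetical.

```python
import numpy as np

def depth_matte(depth, near, far, softness=0.1):
    """Alpha matte that keeps pixels between near and far (in meters).

    softness feathers the matte edges so the cut is not a hard
    binary step -- a stand-in for the gradual falloff a real
    depth-based keyer would need at soft edges.
    """
    front = np.clip((depth - (near - softness)) / softness, 0.0, 1.0)
    back = np.clip(((far + softness) - depth) / softness, 0.0, 1.0)
    return front * back

# Keep everything between 2 m and 5 m from the camera, then
# composite the extracted foreground over a new background.
rgb = np.random.rand(1080, 1920, 3)                  # placeholder frame
depth = np.random.uniform(0.5, 20.0, (1080, 1920))   # placeholder depth map
alpha = depth_matte(depth, near=2.0, far=5.0)[..., None]
new_bg = np.zeros_like(rgb)
comp = rgb * alpha + new_bg * (1.0 - alpha)
```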

Solid objects with well-defined edges are never really a problem to extract, either with roto or by using green/blue screens. But edges are often soft and semi-transparent because of sub-pixel detail, defocus or motion blur, and this is where things become tricky. Extraction is seldom a push-button affair, and it takes skills, time and hard work to produce good-looking composites. Will depth-based extraction suffer from the same shortcomings? Let’s take a closer look at some of these common issues:

Sub-pixel detail 

Wispy, thin strands like hair and fur are very difficult to extract because the detail is so minuscule. Rotoscoping hair is a nightmarish task, and the results rarely preserve the detail. Green screen keying works better for sub-pixel detail, but it’s still challenging to get an extraction that does not look chunky or noisy. Depth-based separation is no different in this respect: depth information for sub-pixel detail will very likely be partial or inconsistent, presenting similar challenges to VFX artists.

Here’s the good news: Lytro Cinema’s staggering 755-megapixel sensor means that there’s enough pixel coverage for all but the tiniest detail, which promises an accurate depth channel and a smooth, detailed, noise-free separation.

Defocused edges 

Defocused edges are always a big extraction challenge, for two reasons. First, it is notoriously hard to preserve the soft, gradually dissipating edges of defocused elements (and it becomes even harder when a shot contains overlapping in-focus and out-of-focus elements).

Second, defocused edges are semitransparent, and carry some of the original background information. In the absence of a consistent green or blue background, details of the original background will show through even when that background is replaced. To avoid this, we often roto or extract “to the last solid pixel” and then artificially re-create the defocused edges. This is evidently more of a hack than a streamlined solution. 

That’s where Lytro can really shine: it lets you use the depth information to change the defocus in post. This much-touted feature is not only exciting for DPs and directors, but it is significant for visual effects too. It means that shots can be captured fully sharp for optimal separation and manipulation across the entire depth of field. When the extraction is done, defocus (true optical defocus, not a 2D hack) can be applied at any focal point. 
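
In spirit, depth-driven defocus can be approximated with layered blurs. The sketch below (Python with SciPy; my own simplified illustration, far cruder than true optical refocusing from a light field) slices the image into depth layers and blurs each one by its distance from the focal plane:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus(rgb, depth, focal_dist, strength=2.0, layers=8):
    """Blur each depth slice in proportion to its distance
    from the focal plane, then recombine the slices."""
    out = np.zeros_like(rgb)
    weight = np.zeros(depth.shape)
    edges = np.linspace(depth.min(), depth.max(), layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        if not mask.any():
            continue
        # Blur radius grows with distance from the focal plane,
        # a crude stand-in for the circle of confusion
        sigma = strength * abs(0.5 * (lo + hi) - focal_dist)
        for c in range(3):
            out[..., c] += gaussian_filter(rgb[..., c] * mask, sigma)
        weight += gaussian_filter(mask.astype(float), sigma)
    # Normalize so the blurred layer weights sum back to full opacity
    return out / np.maximum(weight, 1e-6)[..., None]
```

Because the whole frame was captured sharp, the focal_dist parameter can be anything; pulling focus becomes a post decision rather than an on-set one.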

Motion-blurred edges

Just like with defocus, edges of fast-moving elements are hard to extract and preserve, and carry background information. And just like with defocus, VFX artists often end up trashing the original motion-blur trails and recreating them from scratch in comp. But this is even trickier than recreating defocus, because motion blur must be in sync with the speed and direction of the moving elements. 

On 2D imagery, we can only analyze 2D movement (left-right and up-down). This gets further complicated when overlapping elements have contradicting motion. Imagine two actors involved in a furious fight while being shot with a hectic hand-held camera. There are so many contradicting and overlapping movements that any attempt to generate coherent motion vectors in 2D will end up a complete mess. 

Here again, Lytro’s capabilities are very promising. At rates of up to 300 frames per second, it can shoot with practically zero motion blur. But unlike a standard digital camera, Lytro lets you re-apply true, 3D-accurate motion blur down to the single-pixel level. This lets you retime the footage back to normal speed and add back motion blur for a true 24 FPS feel.
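
The 2D essence of that retiming step can be sketched by averaging the high-frame-rate frames that fall inside each output frame's virtual shutter window. This is a hypothetical illustration only; Lytro's re-applied blur is computed from true 3D motion, not from frame averaging:

```python
import numpy as np

def retime_with_blur(frames, src_fps=300, dst_fps=24, shutter=0.5):
    """frames: (N, H, W, 3) array shot at src_fps with near-zero blur.
    shutter is the fraction of each output frame interval the virtual
    shutter stays open (0.5 = a 180-degree shutter)."""
    step = src_fps / dst_fps                 # source frames per output frame
    window = max(1, int(round(step * shutter)))
    out, t = [], 0.0
    while int(t) + window <= len(frames):
        start = int(t)
        # Averaging the frames inside the shutter window creates
        # motion-blur trails consistent with the on-screen movement
        out.append(frames[start:start + window].mean(axis=0))
        t += step
    return np.stack(out)

# One second of 300 fps footage becomes 24 blurred frames; each output
# frame averages 6 source frames (12.5 per interval, half-open shutter).
hi_fps = np.random.rand(300, 270, 480, 3)    # placeholder footage
movie_24 = retime_with_blur(hi_fps)
```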

So, what’s the catch?

It is quite clear that Lightfield cinematography offers major advantages for the VFX workflow, and is bound to replace cumbersome low-tech separation methods like rotoscoping and green screens. Let’s put aside the fact that Lytro Cinema is still a bulky, expensive prototype and not yet a practical solution for the average film production (let alone indie filmmakers). Looking back at the evolution of digital cameras and computing technologies over the past 10 years, we can assume that Lightfield technology will become more affordable (and portable) within a few years.

The big catch is file sizes. Enormous file sizes. The combination of extremely high frame rates, very high resolution and multiple passes with loads of information per frame means that the entire pipeline—from capture through VFX to post—has to deal with massive, debilitating amounts of data.
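
To get a sense of the scale, here is a quick back-of-envelope calculation. The numbers are my own illustrative assumptions (16-bit raw samples, a single pass), not Lytro's published specs:

```python
# Rough raw data rate for a 755 MP sensor at Lytro Cinema's top frame rate
sensor_pixels = 755e6        # 755 megapixels
bytes_per_sample = 2         # assume 16-bit raw samples
fps = 300

per_frame_gb = sensor_pixels * bytes_per_sample / 1e9
per_second_gb = per_frame_gb * fps
print(f"{per_frame_gb:.1f} GB per frame, {per_second_gb:.0f} GB per second")
# ~1.5 GB per frame and ~450 GB per second, before depth channels,
# multiple passes, or metadata are even counted
```

And this is where Elara steps in.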

VFX on the cloud 

The problems of dealing with very large file sizes did not start with Lytro. Only a few years ago the standard resolution for visual effects was 2K, but VFX houses today are often asked to work on 4K, 6K and even 8K frames. VR and 360-degree projects require even higher resolutions. Manipulating and viewing shots at these resolutions is painfully slow, server speeds and storage capacities cannot keep up with the hefty file sizes, and rendering times of CG scenes rise exponentially. We are quickly reaching a point where even large VFX facilities are struggling.

Foundry’s Elara is cracking the limitations of existing VFX pipelines by offering a new paradigm—the entire pipeline, including the software, storage and rendering processors, sits on the cloud. VFX artists work remotely through a standard web browser, and they can do practically anything using an average laptop, or even a tablet (fast internet is pretty much the only requirement on the artist side).

Nuke Studio in Elara

Assets are all kept in a single place and are easily shared between production, editorial, and the VFX facilities. For example, in a Lytro session, the raw material will be uploaded directly to the cloud, and the VFX artists will do their work there, without ever downloading the material locally. Artists will be able to take advantage of the immense storage and computing power that the cloud offers. Rather than investing in static hardware, VFX companies will pay for as much or as little as they need for any given project. It’s a scalable, pragmatic approach that opens the door to exciting new opportunities.

The future of VFX

Elara’s cloud-based platform is bound to change the VFX industry, and these changes will affect the way filmmakers work with visual effects. Directors and producers often feel limited by the fact that the VFX are done in some remote facility, far from the daily interaction of the editorial team. But with the entire workflow moving to the cloud, fixed hardware and physical facilities will not matter much. Filmmakers will be able to assemble an in-house VFX team that could sit anywhere, even right next to the editors or in the production offices. 

This could also be a big boon for indie filmmakers who cannot afford the high overhead costs of large VFX facilities. Furthermore, Elara will make collaboration between individuals or companies across the globe smooth and straightforward. With all the assets centralized and easily accessible, and no downtimes for downloads and uploads, anyone from anywhere could contribute to the workflow. 

Using Lytro depth screens in Elara

As I mentioned, perhaps the most significant promise of Elara is the quasi-unlimited storage and computing power that will allow VFX artists to maximize the potential of future cameras like Lytro Cinema. Sure, it will take some time before Lightfield cinematography becomes mainstream, and green screens and rotoscoping are not going to disappear overnight. But eventually, they will. 

When Lightfield (and maybe other technologies?) becomes a standard feature of production video cameras, filmmakers will be free of the limitations of green and blue screens. VFX shots will be easier and faster to set up, and will require no special lighting or rigging. VFX tasks like camera tracking, rotoscoping and keying will not be needed anymore, which will shorten post-production times, drive costs down, and allow filmmakers to shift their budgets toward more elaborate visual effects. 

With Lytro and Elara taking center stage in the world of visual effects, big changes are coming. Get a front-row seat; it’s going to be interesting. See Lytro and Elara's presentation from NAB below:

Eran Dinur is the author of The Filmmaker's Guide to Visual Effects and Senior Visual Effects Supervisor at Brainstorm Digital. See his work here, and follow his book The Filmmaker's Guide to Visual Effects on Facebook.

9 Comments

There is an old business adage: if you want to effect change (a buying pattern or product, in this case) you have to be either better, cheaper, or faster. Likely two of the three. To be a game changer, you must have a big enough market for your "whatever" and you need to do all three. This does not pass the test... at least not yet!

May 24, 2017 at 11:29AM, Edited May 24, 11:45AM

Ah, the cloud. Eventually, "All of you are belong to us"...

May 24, 2017 at 12:45PM

Vidrazor

Eran, thanks for taking the time to share this information with us.

I apologize for seeming negative, but some of what you've written is slightly misleading. Lytro seems like a really cool product, but there are a number of massive shortcomings that have yet to be addressed.

You mentioned a handheld action shot and being able to easily manipulate it in post with Lytro. Have you seen any handheld shots with this Lytro Cinema Camera? No. No one has. A huge question mark surrounds the ability of the Lytro camera to do anything other than a nearly locked off, entirely studio controlled shoot.

Have you seen a wide angle lens on a Lytro camera? No. No one has. How would you shoot a handheld action scene without a wide angle lens?

Take an objective look at the image quality of the Lytro camera. It looks like pre-Genesis quality footage. The camera that no one used because it looked very bad. Despite the fact they're trumpeting 16 stops of DR, all of the shots they've shown thus far look very, very not good in terms of pure image quality.

A number of cloud based rendering services already exist and prices associated with most of them are out of reach of most indie filmmakers. I understand that Elara has more features than just rendering but why should we expect anything other than incredibly high prices aimed at high end productions?

I get that these are really good new pieces of technology and MAYBE someday they will trickle down to indie filmmakers. But in the meantime you can spend 10k for a Blackmagic Ursa Mini and lenses and get better image quality than the Lytro. Thanks again for the article, Eran.

May 24, 2017 at 1:29PM, Edited May 24, 1:29PM

Dan, thanks for the comments; all good points. Allow me to address some of them.

This article is about the future of VFX. Lytro is a first step, and yes, it's a bulky and expensive prototype, like I said in the article. But the potential of shooting with pixel-accurate depth is too important for VFX to ignore, or even dismiss because Lytro Cinema is just a prototype. Like other VFX professionals, I have been waiting for years for such technology. I don't know if Lytro themselves will carry the torch or some other maker will find a better (or cheaper) way of doing it, but eventually it will be mainstream. It has to.

Filmmakers were very suspicious of digital cameras at first. Film and video were miles apart in terms of quality. Now everyone is using digital cameras except a few purists. The tech gets better, sizes get smaller, and competition drives prices down, eventually.

Elara is not a beefed-up cloud rendering service. It's a completely different concept: the entire VFX facility sits on the cloud, and you can do anything (modeling, texturing, lighting, compositing, simulations, rendering, editing) with it. There is talk that Autodesk (Maya, 3ds Max) is also thinking of making their products available through Elara.

I don't know what the pricing will be, but I doubt that Foundry would be willing to invest in it without thinking of a reasonable, sustainable business model, which means more than just a handful of customers.

Let's give things some time. I believe these developments will become mainstream.

May 24, 2017 at 3:23PM

Eran Dinur
Visual Effects Supervisor

Wow... Lytro is truly the future of filmmaking. I'm blown away by the ability to capture and reproduce effects holographically.

May 25, 2017 at 4:02AM

William Lee
Producer, Director, DP, Writer

Professionals place their focus right where they want it on the day, and never want it anywhere but there.

Lytro got caught up in "can we do this," I think, rather than looking into whether people actually want it. Load any movie or TV show and try to find a shot where you think the director or cameraperson might like to change the focus at a later date.

It's not worth the expense, size and data management. Perhaps there might be some very niche use for it somewhere, but I've never shot anything that I later wished I had focused on something else in the frame.

May 25, 2017 at 7:43PM, Edited May 25, 7:43PM

Actually, zLense got a NAB Best of Show Award for introducing its zKey™ 3D keyer, which does real-time keying in HD, based on depth, without any solid-color background.

http://zlense.com/zkey-greenles-keying/

https://koscsof1.com/2017/04/19/zlense-demonstrates-breakthrough-zkey-3d...

https://youtu.be/p4wilkSlfjc

The product should be available later this year.

Ferenc Koscso
zLense

May 28, 2017 at 2:51AM

"Artists will be able to take advantage of the immense storage and computing power that the cloud offers."

"The cloud" doesn't offer any particular level of performance or storage capacity. "The cloud" could be a 10-year-old laptop acting as a server, a thousand miles away.

And if the data load is so massive that it's crippling even for local storage, how is uploading it to "the cloud" and attempting to work on it over an Internet connection viable?

This is how service providers get you to upload your material and then hold it hostage in yet another software-rental scheme.

May 28, 2017 at 6:16AM

David Gurney
DP

""The cloud" doesn't offer any particular level of performance or storage capacity."
- Yes it does. And the great thing is that you can choose that particular level of performance and storage capacity based on your needs and your budget.

" The cloud could be a 10 year-old laptop acting as a server, a thousand miles away"
- Look at existing cloud-based rendering services and the type of processors you can choose from. I think you'll be relieved to see that they are, how to put it, just a bit better than 10-year-old laptops... And yes, some are a thousand miles away. I guess that's why they invented the internet!

"And if the data load is so massive that it's crippling even for local storage, how is uploading it to "the cloud" and attempting to work on it over an Internet connection viable?"
- Ah, well that's the whole point of Elara, which I explained in the article. To summarize, the entire facility sits on the cloud. Which means you can use more processing power and more memory than your in-house workstations, servers and render farm. You will not run any software locally. You will be operating the software remotely through a web browser.

"This is how service providers get you to upload your material and then hold it hostage in yet another software-rental scheme."
- Maybe you are right. But I would much rather get excited about innovative ideas than dismiss them out of bitterness and suspicion.

May 29, 2017 at 1:07AM, Edited May 29, 1:15AM

Eran Dinur
Visual Effects Supervisor