Last year at the NAB show, Lytro introduced its groundbreaking camera, Lytro Cinema. This year, software developer Foundry (the company behind VFX compositing application Nuke) introduced Elara, a revolutionary platform for VFX production on the cloud. One of Foundry’s main collaborators in the development of Elara was Lytro. This collaboration between two innovative forces may drastically impact the future of visual effects and the VFX industry as a whole.

The end of roto and green screens?

Much has been said in the past year about Lytro's potential to make rotoscoping and green screens completely obsolete. This is more than just a novelty; it signifies a long-overdue change in the way visual effects are done.


I’ll say it loud and clear: rotoscoping and green screens are the lowest-tech areas of VFX. It seems almost surreal that while technology lets us realistically simulate the infinitely complex behavior of water, or accurately calculate the way millions of sand particles interact with the environment, roto artists are still painstakingly tracing the contours of a subject, manually moving hundreds of points frame by frame. And green screens? Show me one filmmaker who will not be happy to get rid of those bulky, spill-casting and light-reflecting objects once and for all. It's about time to move on.

Lytro’s primary strength is its ability to capture accurate per-pixel depth information. With depth-based separation, you can easily keep or discard areas in the frame according to their distance from the camera. This brings us closer than ever to the VFX promised land: a land without unnatural green screens or tedious rotoscoping. But can Lightfield technology truly overcome the challenges of extraction, and provide an equal (if not better) alternative to the current methods?
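Before tackling that question, it helps to see how simple the core mechanism is. Here is a minimal sketch of a depth matte, assuming the camera delivers a per-pixel depth channel alongside the image; the function, units and thresholds are mine, purely for illustration:

```python
import numpy as np

def depth_matte(depth, near, far, softness=0.1):
    """Keep pixels between near and far (scene units), with feathered edges.

    depth: HxW array of per-pixel distances from the camera
    softness: feathering range at the near/far boundaries, to avoid hard cuts
    """
    inner = np.clip((depth - near) / softness, 0.0, 1.0)  # ramp up past near
    outer = np.clip((far - depth) / softness, 0.0, 1.0)   # ramp down before far
    return inner * outer

# Usage: keep everything between 2 and 5 meters, comp it over a new plate.
# matte = depth_matte(depth, near=2.0, far=5.0)
# comp = image * matte[..., None] + new_plate * (1 - matte[..., None])
```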

Solid objects with well-defined edges are never really a problem to extract, either with roto or by using green/blue screens. But edges are often soft and semi-transparent because of sub-pixel detail, defocus or motion blur, and this is where things become tricky. Extraction is seldom a push-button affair, and it takes skills, time and hard work to produce good-looking composites. Will depth-based extraction suffer from the same shortcomings? Let’s take a closer look at some of these common issues:

Sub-pixel detail 

Wispy, thin strands like hair and fur are very difficult to extract because the detail is so minuscule. Rotoscoping hair is a nightmarish task and the results rarely preserve the detail. Green screen keying works better for sub-pixel detail, but it’s still challenging to get an extraction that does not look chunky or noisy. Depth-based separation is no different in this respect. It is very likely that depth information for sub-pixel detail will be partial or inconsistent, presenting similar challenges to the VFX artists.

Here comes the good news: Lytro Cinema’s staggering 755 Megapixel sensor means that there’s enough pixel coverage for all but the tiniest detail, which promises an accurate depth channel and a smooth, detailed, noise-free separation.
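To put that number in perspective, here is a quick back-of-envelope comparison against a standard 4K UHD delivery frame (my arithmetic, not a published Lytro spec):

```python
# How much oversampling does 755 MP buy over a 4K UHD delivery frame?
lytro_pixels = 755e6                  # Lytro Cinema sensor
uhd_pixels = 3840 * 2160              # ~8.3 MP per 4K UHD frame

print(lytro_pixels / uhd_pixels)           # ~91x the pixel count
print((lytro_pixels / uhd_pixels) ** 0.5)  # ~9.5x finer sampling per axis
```

A strand of hair that barely covers a single 4K pixel spans several sensor pixels at this resolution, which is why a coherent depth channel at that scale becomes plausible.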


Defocused edges 

Defocused edges are always a big challenge for extraction, for two reasons. First, it is notoriously hard to preserve the soft, gradually dissipating edges of defocused elements (this becomes an even bigger challenge when a shot contains both in-focus and out-of-focus elements that overlap).

Second, defocused edges are semitransparent, and carry some of the original background information. In the absence of a consistent green or blue background, details of the original background will show through even when that background is replaced. To avoid this, we often roto or extract “to the last solid pixel” and then artificially re-create the defocused edges. This is evidently more of a hack than a streamlined solution. 
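A little math shows why those edges are so stubborn. At a semi-transparent edge pixel, the camera records a blend of foreground and background; the numbers below are illustrative:

```python
import numpy as np

# What the sensor records at a semi-transparent edge pixel:
#   observed = alpha * foreground + (1 - alpha) * background
alpha = 0.4                          # 40% of the pixel is covered by the subject
fg = np.array([0.80, 0.55, 0.45])    # true foreground color (illustrative)
bg = np.array([0.15, 0.70, 0.20])    # original background color

observed = alpha * fg + (1 - alpha) * bg

# The (1 - alpha) * bg term is baked into the pixel. Cut it out and place it
# over a new plate, and the original background travels along inside the edge,
# unless you discard the soft edge entirely and rebuild it, as described above.
```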

That’s where Lytro can really shine: it lets you use the depth information to change the defocus in post. This much-touted feature is not only exciting for DPs and directors but also significant for visual effects. It means that shots can be captured fully sharp for optimal separation and manipulation across the entire depth of field. When the extraction is done, defocus (true optical defocus, not a 2D hack) can be applied at any focal point.
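For an intuition of how depth-driven defocus works, here is a crude slice-based sketch in which blur grows with each pixel's distance from a chosen focal plane. True Lightfield refocusing re-integrates actual rays and is far more accurate; this function is only my illustration of the principle:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focal_depth, strength=1.5, slices=8):
    """Crude depth-driven defocus: blur grows away from the focal plane.

    image: HxWx3 float array captured fully sharp
    depth: HxW per-pixel distance, in the same units as focal_depth
    """
    zmin, zmax = depth.min(), depth.max()
    dz = (zmax - zmin) / slices or 1.0             # guard against flat depth
    result = np.zeros_like(image)
    weight = np.zeros(depth.shape)
    for z in np.linspace(zmin, zmax, slices):
        sigma = strength * abs(z - focal_depth)    # blur radius for this slice
        layer = gaussian_filter(image, sigma=(sigma, sigma, 0)) if sigma > 0 else image
        mask = np.exp(-(((depth - z) / dz) ** 2))  # soft membership in the slice
        result += layer * mask[..., None]
        weight += mask
    return result / np.maximum(weight, 1e-6)[..., None]
```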

Motion-blurred edges

Just like with defocus, edges of fast-moving elements are hard to extract and preserve, and carry background information. And just like with defocus, VFX artists often end up trashing the original motion-blur trails and recreating them from scratch in comp. But this is even trickier than recreating defocus, because motion blur must be in sync with the speed and direction of the moving elements. 

On 2D imagery, we can only analyze 2D movement (left-right and up-down). This gets further complicated when overlapping elements have contradicting motion. Imagine two actors involved in a furious fight while being shot with a hectic hand-held camera. There are so many contradicting and overlapping movements that any attempt to generate coherent motion vectors in 2D will end up a complete mess. 
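For context, this is roughly how per-pixel 2D motion vectors are generated today, using OpenCV's Farneback optical flow as a stand-in for production tools (the frame filenames are placeholders):

```python
import cv2

# Estimate per-pixel 2D motion vectors between two consecutive frames.
prev_gray = cv2.cvtColor(cv2.imread("frame_0100.png"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("frame_0101.png"), cv2.COLOR_BGR2GRAY)

# Arguments: pyramid scale, levels, window size, iterations,
# polynomial neighborhood size, polynomial sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# flow[y, x] = (dx, dy): left-right and up-down only, with no depth axis.
# Where overlapping subjects move in contradicting directions, a single
# (dx, dy) per pixel cannot represent both layers, and the vectors break down.
```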

Here again, Lytro’s capabilities are very promising. At rates of up to 300 frames per second, it can shoot with practically zero motion blur. But unlike a standard digital camera, Lytro lets you re-apply true, 3D-accurate motion blur down to the single-pixel level. This lets you retime the footage back to normal speed and add back motion blur for a true 24 FPS feel.
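The simplest way to get a feel for the retiming side is a synthetic shutter that averages consecutive sharp high-speed frames. This sketch is my approximation of the basic idea, not Lytro's actual depth-driven method:

```python
import numpy as np

CAPTURE_FPS = 300
TARGET_FPS = 24
SHUTTER = 0.5   # 180-degree shutter: blur spans half of each 1/24 s interval

def retime_to_24(frames):
    """frames: list of HxWx3 float arrays shot sharp at 300 fps."""
    step = CAPTURE_FPS / TARGET_FPS        # 12.5 source frames per output frame
    span = max(1, round(step * SHUTTER))   # ~6 source frames per exposure
    out = []
    t = 0.0
    while int(t) + span <= len(frames):
        start = int(t)
        # Averaging consecutive sharp frames approximates the light a 24 fps
        # camera would have accumulated while its shutter was open.
        out.append(np.mean(frames[start:start + span], axis=0))
        t += step
    return out
```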


So, what’s the catch?

It is quite clear that Lightfield cinematography offers major advantages for the VFX workflow, and is bound to replace cumbersome low-tech separation methods like rotoscoping and green screens. Let’s put aside the fact that Lytro Cinema is still a bulky, expensive prototype and not yet a practical solution for the average film production (let alone indie filmmakers). Looking back at the evolution of digital cameras and computing technologies in the past 10 years, we can assume that Lightfield technology will become more affordable (and portable) in a matter of a few years.

The big catch is file sizes. Enormous file sizes. The combination of extremely high frame rates, very high resolution and multiple passes with loads of information per frame means that the entire pipeline—from capture through VFX to post—has to deal with massive, debilitating amounts of data.
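Just how massive? A back-of-envelope calculation makes the point (the bit depth here is my assumption for illustration; Lytro's actual raw encoding differs):

```python
# Rough data rate for a 755 MP sensor capturing at 300 fps.
pixels_per_frame = 755e6
fps = 300
bits_per_pixel = 12    # assumed for illustration

bytes_per_sec = pixels_per_frame * fps * bits_per_pixel / 8
print(f"{bytes_per_sec / 1e9:,.0f} GB per second of capture")  # ~340 GB/s
print(f"{bytes_per_sec * 60 / 1e12:,.0f} TB per minute")       # ~20 TB/min
```

And this is where Elara steps in.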

VFX on the cloud 

The problems of dealing with very large file sizes did not start with Lytro. Only a few years ago the standard resolution for visual effects was 2K, but VFX houses today are often asked to work on 4K, 6K and even 8K frames. VR and 360-degree projects require even higher resolutions. Manipulating and viewing shots at these resolutions is painfully slow, server speeds and storage capacities cannot keep up with the hefty file sizes, and rendering times of CG scenes balloon. We are quickly reaching a point where even large VFX facilities are struggling to keep up.

Foundry’s Elara breaks through the limitations of existing VFX pipelines by offering a new paradigm—the entire pipeline, including the software, storage and rendering processors, sits on the cloud. VFX artists work remotely through a standard web browser, and they can do practically anything using an average laptop, or even a tablet (fast internet is pretty much the only requirement on the artist side).

Nuke Studio in Elara

Assets are all kept in a single place and are easily shared between production, editorial, and the VFX facilities. For example, in a Lytro session, the raw material will be uploaded directly to the cloud, and the VFX artists will do their work there, without ever downloading the material locally. Artists will be able to take advantage of the immense storage and computing power that the cloud offers. Rather than investing in static hardware, VFX companies will pay for as much or as little as they need for any given project. It’s a scalable, pragmatic approach that opens a door to exciting new opportunities.

The future of VFX

Elara’s cloud-based platform is bound to change the VFX industry, and these changes will affect the way filmmakers work with visual effects. Directors and producers often feel limited by the fact that the VFX work is done in some remote facility, far from the daily interaction of the editorial team. But with the entire workflow moving to the cloud, fixed hardware and physical facilities will not matter much. Filmmakers will be able to assemble an in-house VFX team that could sit anywhere, even right next to the editors or in the production offices.

This could also be a big boon for indie filmmakers who cannot afford the high overhead costs of large VFX facilities. Furthermore, Elara will make collaboration between individuals or companies across the globe smooth and straightforward. With all the assets centralized and easily accessible, and no downtime for downloads and uploads, anyone from anywhere could contribute to the workflow.

Using Lytro depth screens in Elara

As I mentioned, perhaps the most significant promise of Elara is the virtually unlimited storage and computing power that will allow VFX artists to maximize the potential of future cameras like Lytro Cinema. Sure, it will take some time before Lightfield cinematography becomes mainstream, and green screens and rotoscoping are not going to disappear overnight. But eventually, they will.

When Lightfield (and maybe other technologies?) becomes a standard feature of production video cameras, filmmakers will be free of the limitations of green and blue screens. VFX shots will be easier and faster to set up, and will require no special lighting or rigging. VFX tasks like camera tracking, rotoscoping and keying will not be needed anymore, which will shorten post-production times, drive costs down, and allow filmmakers to shift their budgets toward more elaborate visual effects. 

With Lytro and Elara taking center stage in the world of visual effects, big changes are coming. Get a front row seat; it's going to be interesting.

Eran Dinur is the author of The Filmmaker's Guide to Visual Effects and Senior Visual Effects Supervisor at Brainstorm Digital. See his work here, and follow his book The Filmmaker's Guide to Visual Effects on Facebook.