April 22, 2014

Illum Is Version 2.0 of the Lytro Camera You Can Refocus After Shooting, and Video Might Be Next

Lytro introduced its light-field camera a little over two years ago, and if you haven't heard of it, the technology inside lets you change the focus point of your images after you've already taken the shot. Part hardware and part software, the tech has been steadily improving (with other companies like Toshiba getting in on the action). The company has now introduced a brand new version, called Illum, that is capable of much higher quality than the previous camera, and it takes the shape of a more traditional DSLR/mirrorless body for better usability. Beyond the possibilities with still images, we've also got word that they are working on using the technology for video capture.

https://www.youtube.com/watch?v=P5q8XdhBAZg

Here are some shots of the new design, which includes a large screen and a non-interchangeable zoom lens:

[Photos: Lytro Illum light-field camera, front and back]

Here are the specs of the new Illum (see the full spec list here):

  • Light Field Resolution: 40 Megaray CMOS Sensor (previous version was 11 Megaray)
  • 2D Printed Resolution: 4 Megapixels (previous version was a little over 1 megapixel)
  • 9.5 - 77.8 mm f/2.0 Constant Aperture Autofocus Lens (30 - 250mm 35mm-equivalent; see the quick check after this list)
  • Crop Factor: 3.19
  • Macro Focus to 0 mm from the lens front
  • Articulating 4" 480 x 800 Touchscreen
  • Focal Plane Shutter/Fastest Shutter Speed: 1/4000 sec
  • SD Card
  • Live view and Playback with Light Field Refocus
  • ISO-compatible hot shoe with center-pin sync, Manual and Lytro-TTL flash modes
  • Tripod Socket Standard 1/4"-20
  • Removable Li-Ion battery
  • Micro USB 3.0 port
  • Weight: 940 grams / 33.15 oz / 2.07 lbs
  • Availability: July 2014
  • Price: $1,600, or $1,500 with a $250 pre-order deposit
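As a quick sanity check on that 35mm-equivalent range: multiply the physical focal lengths by the listed 3.19 crop factor and you land right on the quoted numbers. A trivial sketch (the function name is ours, not Lytro's):

```python
# 35mm-equivalent focal length = physical focal length x crop factor.
def equivalent_focal_length(focal_mm: float, crop_factor: float = 3.19) -> float:
    return focal_mm * crop_factor

for f_mm in (9.5, 77.8):
    print(f"{f_mm} mm -> {equivalent_focal_length(f_mm):.0f} mm equivalent")
# 9.5 mm -> 30 mm equivalent; 77.8 mm -> 248 mm (marketed as 250mm)
```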

The camera also has a new "Lytro" button, and here's how the company describes the depth feedback it enables:

During image capture an interactive depth feedback display shows the relative focus of all objects in the frame, allowing composition in three dimensions. A real-time color-coded overlay of the live view lets you know which elements of the picture are within the re-focusable range.
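Lytro doesn't spell out how that overlay is computed, but conceptually it is a per-pixel depth test: anything whose estimated depth falls inside the refocusable range gets tinted in live view. A minimal sketch of the idea, assuming you already have a depth map and known near/far limits (all names here are hypothetical, not Lytro's API):

```python
import numpy as np

def refocus_overlay(image: np.ndarray, depth: np.ndarray,
                    near: float, far: float) -> np.ndarray:
    """Tint pixels whose depth falls inside the refocusable range.

    image: HxWx3 float RGB in [0, 1]
    depth: HxW per-pixel depth estimate (same units as near/far)
    """
    inside = (depth >= near) & (depth <= far)   # boolean mask, HxW
    overlay = image.copy()
    tint = np.array([0.0, 0.4, 1.0])            # blue-ish highlight color
    overlay[inside] = 0.6 * overlay[inside] + 0.4 * tint
    return overlay
```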

If you want to mess with some photos taken with the Lytro camera, there are interactive galleries over on Lytro's website.

Here is more about "Light Field" technology from a terrific article by The Verge:

Light-field photography has been discussed since the 1990s, beginning largely with three Stanford professors, Marc Levoy, Mark Horowitz, and Pat Hanrahan. (The term "light field" was first coined in 1936, and Gabriel Lippmann created something like a light-field camera in 1908, though he didn’t have a name for it.) Instead of measuring color and intensity of light as it hits a sensor in a camera, light-field cameras pass that light through a series of lenses (hundreds of thousands in Lytro’s case), which allows the camera to record the direction each ray of light is moving. Understanding light’s direction makes it possible to measure how far away the source of that light is. So where a traditional camera captures a 2D version of a scene, a light-field shot knows where everything in that scene actually is. A processor turns that data into a 3D model like any you’d see in a video game or special effect, and Lytro displays it as a photograph. It’s a little bit like the small bots in Prometheus, spatially mapping an entire room in order to display it back later. Or think of it as a rudimentary holodeck, projecting a simulated scene that changes as you move through and interact with it.
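To make the refocusing idea concrete: once the microlens array has recorded many slightly offset views of the scene, refocusing after the fact is essentially a shift-and-average across those views, the classic "shift-and-sum" approach from the Stanford light-field work the article mentions. A toy sketch, assuming the raw capture has already been decoded into a grid of sub-aperture images (that decoding step, and proper edge handling, are glossed over here):

```python
import numpy as np

def refocus(subapertures: np.ndarray, alpha: float) -> np.ndarray:
    """Synthetically refocus a light field by shift-and-sum.

    subapertures: (U, V, H, W, 3) array of sub-aperture views, where
                  (u, v) indexes a position on the lens aperture.
    alpha: picks the synthetic focal plane; 0 keeps the plane the
           camera was focused on, +/- values move it nearer/farther.
    """
    U, V, H, W, _ = subapertures.shape
    out = np.zeros((H, W, 3))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture center, then accumulate (np.roll wraps at the
            # image edges, which is fine for a toy example).
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(subapertures[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```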

And where this technology is headed for video:

"If you look at a big-budget Hollywood production today, they’ll spend between 9 and 14 million dollars on just incremental hardware to shoot 3D, because you need multiple rigs. We can do all that in single-lens, single-sensor — that’s a big deal," Rosenthal says. "You look at the credits at the end of a movie and you see Camera Assistant 1, Camera Assistant 2, Camera Assistant 3… they’re doing focus pulls on set. If you can make that an after-the-fact decision, that’s a pretty big deal." Of course to achieve that in practice and not just theory, Lytro would need to make a camera that records video. But that’s on the roadmap: "That’s something that largely gets solved as computational power continues on its Moore’s law rate of increase." Processors double in speed every two years, Moore says; Lytro’s perfectly positioned to take advantage of every increase.

Now, even though the megapixel count is still low compared to most cameras (when the Lytro pictures are flattened to 2D), the purpose of this camera is really to create interactive pictures. The video above gives a pretty good idea of how these can be used in motion right now, by creating an effect that guides the viewer's eye through the picture with different focus points. As the resolution increases, we are going to see more and more applications for this kind of technology, especially as it relates to motion capture. Other companies are already working on their own light-field sensors, and we aren't too far off from this being a very real tool for filmmakers, especially with companies like RED claiming to be working on similar technology.

Find out more about the camera and the tech, and pre-order one over on their website.

Link: Lytro Illum

[via The Verge & Mashable & FStoppers]


21 Comments

Each picture is probably like 40MB... but still awesome!
kevin, April 22, 2014 at 1:56PM

The voiceover is so hipster it hurts.
Natt, April 22, 2014 at 2:29PM

Dammit, why did I choose camera assistant as a career? Looks like I'm in the same boat as factory workers and supermarket checkouts: replaced by robots!!
zakb, April 22, 2014 at 3:52PM

Even with the latest technology it's still a job for a camera assistant. The... can't remember the name... focus technology they were showing at NAB does not work without someone setting it up and assigning points of focus. Plus the matte box, filters, lens changes, cam settings, etc. are still things that even the most independent DPs like to have handled by someone else if it gives them more time to be artistic with the shots. Don't worry, brother, you're not done yet.
April 22, 2014 at 6:02PM

Joe Marine, Camera Department, April 22, 2014 at 7:18PM

And just about everyone in post!
jorge, April 23, 2014 at 9:07AM

Light-field sensors, motion-capture follow-focus systems, CGI Audry Hepburns and De Niros, etc... in 50 years the last vestigial analogue filmmaking tool, THE LENS, will probably become an artistic choice rather than a requirement.
Fredo, April 22, 2014 at 4:06PM

*Audrey
Fredo, April 22, 2014 at 4:09PM

In 50 years' time VHS and Betacam will be soooo cool... Delivering a DVD to a client will be vintage.
April 22, 2014 at 6:04PM

Keep your 5D mk3... it will be retro.
April 22, 2014 at 6:05PM

Delivering DVDs isn't already vintage?
Travis, April 22, 2014 at 10:15PM

I used DVDs long before it was hip and cool!
Natt, April 23, 2014 at 4:06AM

Cool! It seems to me that (when video comes out) you could create a program that reads the captured info and lets you remove foreground/background info at any point in the image... so no more green screens? Compositing would be a piece of cake, and it would open the doors for so many more creative possibilities.
Travis, April 22, 2014 at 9:11PM
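That depth-based keying idea is straightforward once a per-pixel depth map exists: build the matte from depth thresholds instead of chroma. A minimal sketch, assuming the software could export a depth map alongside the RGB frame (nothing here is a real Lytro API):

```python
import numpy as np

def depth_key(frame: np.ndarray, depth: np.ndarray,
              near: float, far: float) -> np.ndarray:
    """Return an RGBA frame where only pixels whose depth lies
    between `near` and `far` stay opaque: a depth matte standing
    in for a green screen.

    frame: HxWx3 float RGB in [0, 1]; depth: HxW depth map.
    """
    alpha = ((depth >= near) & (depth <= far)).astype(frame.dtype)
    return np.dstack([frame, alpha])
```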

The amount of light (particles/waves) coming through a lens is so large that manipulation would not affect perceivable resolution in the slightest. Theoretically speaking, one could take portions of the light in discrete packages and manipulate them in almost whatever way you desire in terms of focal length and focus. This may not require glass; other forms of manipulation through electronics or fields could do it. So a "lens" might cover any focal length and depth of field. It's all the same light coming in anyway.
JPS, April 22, 2014 at 10:11PM

Unlike a good AC and a $30,000 prime, this never does get very sharp.
bestdp4life, April 23, 2014 at 12:25AM

Not yet. But the tech will improve and ultimately surpass human expertise... as it always has ;-)
jorge, April 23, 2014 at 9:09AM

O.k., so it all boils down to the accuracy of the depth maps that this camera is able to generate.

The last time I saw the depth maps generated by their previous-gen camera, they were completely inaccurate, with a lot of nasty artifacts. So consider me very skeptical that this tech will ever work properly + the authors of the tech seem to be desperately trying to hype it up with larger-than-life claims.

+ a single camera can never do proper stereoscopic 3D video. So the talk from the Pelican guy is complete and utter rubbish.

Why? Well, because even the largest distance between the parts of this "multi-sensor" is still way smaller than the distance between the eyes of a human. That means they would have to apply a sort of content-aware fill to fake detail behind objects, which will create nasty, nasty artifacts, considering the accuracy of the depth maps seems to land somewhere in the town of "not usable at all".

When it comes to faking stereoscopic 3D according to a depth map: the Oculus Rift SDK now contains something similar, called "timewarping". In short, it's a method of faking the 3D effect thanks to the availability of 100% accurate depth maps (and almost 100% accurate gyro sensor data).
Peter, April 23, 2014 at 2:00AM
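The disocclusion point is easy to demonstrate. Synthesizing a second eye from one image plus a depth map means shifting each pixel by its disparity, and anything that was hidden behind a foreground object simply has no data, leaving holes that some content-aware fill has to invent. A toy sketch of that depth-image-based rendering step (assumed names, simplified geometry, z-ordering ignored for brevity):

```python
import numpy as np

def synthesize_view(image: np.ndarray, depth: np.ndarray,
                    baseline: float, focal: float) -> np.ndarray:
    """Warp an image to a horizontally offset viewpoint.

    disparity = baseline * focal / depth; disoccluded pixels are
    left as zeros, which are exactly the gaps a fill step must
    hallucinate.
    """
    H, W, _ = image.shape
    out = np.zeros_like(image)
    disparity = (baseline * focal / np.maximum(depth, 1e-6)).astype(int)
    for y in range(H):
        for x in range(W):
            nx = x + disparity[y, x]
            if 0 <= nx < W:
                out[y, nx] = image[y, x]
    return out
```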

+ LOL, say goodbye to taking pictures of refractive and reflective objects, because the cam will never be able to handle those based on this tech. Why? Well, because you would have to somehow magically cook up depth maps that accurately capture the depth information of the surfaces of reflective/refractive objects.

The only folks who were able to compensate for this reflective/refractive effect used a pulsed laser and a DSLR to detect reflections and refractions (I don't remember exactly, but I guess it was the same MIT team that created "femtosecond photography").

The Pelican guys were just bat-shit crazy to ever suggest the whole multi-sensor + refocusing post-processing is a good idea. Maybe they're just releasing this cam to show the investors "hey, we're doing something with your money!"
Peter, April 23, 2014 at 2:10AM

Is there a dual-lens (stereoscopic) system for the same purpose? I think it has one "main" lens and the other auxiliary, to create a "real" 3D that can also be manipulated in post.
DLD, April 23, 2014 at 8:06PM

No Freaking Way!
Inna, April 23, 2014 at 4:40AM

Has anyone used this with AE for a music video or something? You see a lot of animated stills and time-freeze things; this would be the ultimate. Any thoughts on AE integration of sorts?
Kent, April 23, 2014 at 5:08PM