
Meet KaleidoCamera, a Plenoptic Add-On that Brings Amazing New Functionality to Your DSLR

08.16.13 @ 1:30PM

Modern imaging technology never ceases to amaze me. In the past, we’ve talked about light field cameras, such as the Lytro, which allow the user to refocus the image in post. We’ve also talked about new sensor technologies that use color diffraction instead of traditional filtration. Now, a group from Saarland University in Saarbrücken, Germany has developed a camera add-on that sits between the lens and the sensor and brings an array of impressive plenoptic features to any DSLR camera. What features, you might be asking? Light field refocusing, HDR imaging, and light polarization without filters, just to name a few. Interested? Read on for the details:


So what exactly is the KaleidoCamera, and what does it do? Well, in short, it’s an optical accessory that sits between your lens and your camera body, creating plenoptic information through a system of mirrors and filters.

The channels of information can then be manipulated independently of each other in order to create various effects, such as HDR (without having to combine separate exposures), multispectral imaging, filter-less polarization, and of course, light field imaging for management of depth information.

KaleidoCamera Results

The KaleidoCamera accomplishes this impressive list of tasks through a process in which light is broken into various elements within the add-on, and then re-combined before reaching the camera’s sensor.

Essentially, when light enters the KaleidoCamera, it hits a kaleidoscopic element that creates multiple copies of the information, which are then individually modified by different filters before being re-combined into a single image.
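The split-filter-recombine idea behind the single-shot HDR feature can be sketched in code. Below is a toy simulation, not the researchers' actual method: it assumes each kaleidoscopic copy of the scene passes through a different neutral-density filter before hitting the sensor, and that the differently exposed copies are then fused back into one frame. All function names, filter strengths, and values are illustrative.

```python
import numpy as np

def kaleido_split(scene, nd_stops):
    """Simulate the kaleidoscopic split: one copy of the scene per
    filter, each attenuated by a different ND filter strength (in stops)."""
    return [scene * (0.5 ** s) for s in nd_stops]

def sensor_capture(copy, full_well=1.0):
    """Clip each copy the way a real sensor saturates at full well."""
    return np.clip(copy, 0.0, full_well)

def fuse_hdr(captures, nd_stops):
    """Recombine the copies: undo each filter's attenuation and average
    only the unsaturated pixels, recovering a dynamic range that no
    single exposure could hold."""
    num = np.zeros_like(captures[0])
    den = np.zeros_like(captures[0])
    for cap, s in zip(captures, nd_stops):
        valid = cap < 0.99                      # ignore clipped pixels
        num += np.where(valid, cap * (2.0 ** s), 0.0)
        den += valid.astype(float)
    return num / np.maximum(den, 1.0)

# A toy three-pixel scene whose brightness exceeds the sensor's range
scene = np.array([0.2, 1.5, 6.0])
stops = [0, 2, 4]                               # ND filters: 0, 2, 4 stops
captures = [sensor_capture(c) for c in kaleido_split(scene, stops)]
recovered = fuse_hdr(captures, stops)           # matches the original scene
```

The key point the sketch illustrates is that the highlight pixel (6.0) is clipped in the unfiltered copy but survives in the heavily filtered one, so the fused result keeps both shadow and highlight detail from a single shot.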

Here’s a demonstration video from the KaleidoCamera’s creators that shows some of what the device is capable of:

This device seems to have quite a few applications in both the photographic and scientific communities. However, the ability to manipulate plenoptic depth information could potentially be huge for filmmakers, especially those doing extensive green screen work.

If the KaleidoCamera allows various depth cues to be removed from a shot, then it’s entirely possible to be able to separate actors based on their distance from camera rather than through keying out a certain color. If that’s the case, then green screens could become a thing of the past.
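If per-pixel depth really could be recovered from the light field data, the keying idea would look something like the following sketch. This is purely hypothetical: it assumes a depth map already aligned with the frame, and every name and threshold here is illustrative rather than anything the KaleidoCamera actually outputs.

```python
import numpy as np

def depth_matte(depth, near, far):
    """Build an alpha matte from depth instead of color: keep pixels
    whose depth falls between `near` and `far` (meters), regardless of
    what color is behind the actor."""
    return ((depth >= near) & (depth <= far)).astype(float)

def composite(fg_frame, fg_depth, bg_frame, near, far):
    """Place the depth-selected slice of the foreground plate over an
    arbitrary background -- no green screen required."""
    alpha = depth_matte(fg_depth, near, far)[..., None]
    return alpha * fg_frame + (1.0 - alpha) * bg_frame

# Toy 2x2 plate: actor at ~2 m, wall at ~6 m
frame = np.ones((2, 2, 3)) * 0.8
depth = np.array([[2.0, 2.1],
                  [6.0, 5.8]])
bg = np.zeros((2, 2, 3))
out = composite(frame, depth, bg, near=1.0, far=3.0)
```

The binary matte here would look crunchy on real footage (edges, hair, motion blur all need soft falloff), which hints at why a production-ready version of this idea is harder than the one-liner suggests.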

One of the biggest drawbacks at this moment is that the device seems to diminish the visual quality of the image. In the video, it’s mentioned that, at the moment, it’s producing slightly less than full HD results. However, considering that the KaleidoCamera is still in the early stages of development, it seems likely that image quality will get a boost as the product matures.

More in-depth information about the KaleidoCamera and how it works can be found in this scientific paper.

What do you guys think? Is the KaleidoCamera going to revolutionize photography or filmmaking? What other applications might this device be able to achieve? Let us know in the comments!




23 COMMENTS

  • My brain exploded about one minute into that video. I know he was speaking English, but I think I only understood about 20% of that information. Darn you public schools!!!

  • Kareem Kamahl on 08.16.13 @ 1:50PM

    I will wait for them to develop the product more but I am definitely interested!

  • It seems like their different features could be useful for different areas of employment. High HDR to the high end/pro users (although that Magic Lantern high ISO/Low ISO HDR solution seems adequate for now). The refocusing is mostly for the low end consumer level and smart phones (Canon 70D AF system is arguably a better system for the “enthusiast” tier). And the high budget production will need a perfected design, which may be years away (plus, the CGI monster flicks are going away or so we had been told by Spielberg and Lucas)

  • After I finished watching this video, I realized my nose was bleeding. I think that was too much info at one time.

  • They post their samples at 360P? Even if it isn’t delivering full HD quality just yet, at least they should give it the best that is available. Theory is great, but the proof is in the pudding! I would bet that this type of image manipulation will never deliver the image quality of a lens that is directly projecting an image of light on the sensor. But I hope I’m wrong, as it would be great to have such capabilities! Oh, and they never mention what effect this device has on the speed of a lens.

    • Rodrigo Molinsky on 08.16.13 @ 2:22PM

      +1

      I was waiting for them to mention how many stops of light it loses.

    • One of the alternative technologies is in the new Nokia phone. When you have a 41 megapixel sensor, you can then pick/magnify any portion of the screen down to 5 MP or so and it will already be in focus.

      • ok.. is this a plug? a Nokia phone having 41 megapixels has absolutely nothing to do with, and is not even in the same league as an add-on that simulates wider apertures, High Dynamic Range in one shot, and a whole bunch of other awesome stuff that I barely understood!

  • awesome, i knew someone would invent some kind of filter to increase dynamic range..

  • It’s really not that hard to follow along with the explanation of the technology but I will admit the actual calculations involved to achieve this are far beyond my comprehension. There are some amazing possibilities with this tech, potentially, but at the moment the image quality leaves a lot to be desired. The fact that they have to run the image through a diffusion filter just kills the sharpness completely.

    I hope they can refine this.

  • Do you suppose that light field photography could be combined with pupil tracking so that what you look at is always in focus on the screen? So in a 3d film you could decide to look at something in the back of the scene and it would adapt instantly to what you were looking at? This would simultaneously increase realism and take a tool away from cinematographers, who could no longer control what you saw in focus, but it would be very very interesting all the same. I think it is one of the barriers to convincing 3d too, because in real life, your eyes are forced to focus on a given plane.

    • Please do it!!!

    • This is exactly the first thing I was thinking of when I heard of light field photography too!

      Then maybe using 3d combined with screen goggles that track head movement and move around an hd image within a bigger image

  • just show me how to use it, and show me the results. Not too interested in the science behind it just yet.

  • Tremendous possibilities. Very cool.

  • The ultimate “fix this in post” crutch.

  • David Sharp on 08.16.13 @ 7:08PM

    Very cool, but I wonder if it’s too much of a middleman approach. I think this idea could be constructed more efficiently (as far as resolution and light loss are concerned) if the plenoptics were built into the camera body and used multiple CMOS sensors to capture the image. It would be an expensive camera, but man….

  • Did they have a custom built camera for interpreting the light field data? Or was it being output as a RAW file and they were able to translate the data from that?

  • Even at the developmental stage, this is pretty impressive stuff. In particular, the ability (on a limited scale) to navigate around the subject in three dimensional space *after* the image has been captured reminds me of Deckard’s look-around-the-corner technology in Blade Runner. :D

    Also the capacity to extend or retract the planes of action in the image provides an ‘effect’ equivalent to something which – outside the animal kingdom – has largely been impossible till now: a flexible lens surface. In the future, with this technology perfected, will we only need one lens? With this device, one could shoot extreme telephoto from a distance and then extend the p.o.a. to render the image like a 50mm or wide angle capture, which would be a mind-blowing capability, if realized.

    Finally, away from filmmaking, there’s been much talk about plenoptics as a possible game-changer for astronomical photography: imagine being able to separate out stars from one another in an image?

    Anyway, I hope some serious money is thrown behind this technology.

  • good to know i am not the only one who feels stupid after watching this video ;-O
