Chances are you've spent your fair share of precious time looking at (or trying to look at) one of those Magic Eye posters. You know what I'm talking about -- those pictures that look like nothing but static until you relax your eyes enough to see the hidden T-Rex or Eiffel Tower pop out in 3D. These random dot autostereograms have been used for over half a century, but it wasn't until recently that director Jared Raab created the first random dot autostereogram music video for the band Young Rival's single "Black is Good". Continue on to see (or try to see) the video and learn how they pulled it off.
It might help to know exactly what a random dot autostereogram is. When viewed correctly, these images produce the illusion of depth -- a 3D scene emerging from a flat 2D picture. Not everyone can see them (the effect depends on binocular vision, so it's out of reach for people with sight in only one eye or with certain eye impairments), but there are several techniques you can use to bring the scene to life.
There are two versions of Young Rival's music video, each requiring a different viewing technique. One is a parallel-eye autostereogram, which requires you to relax your focus and "look through" the image. The other is a cross-eye autostereogram, which is pretty self-explanatory -- you cross your eyes until the scene snaps into place.
Check out each version of the music video below, then continue on to find out how the filmmakers pulled it off.
And if you've tried your best to see the above videos to no avail, this depth-map version shows you what you were missing -- though it's a lot less romantic and wonderful.
So, how did Raab and his team approach making this? They captured the band with a Kinect and the RGBD Toolkit software -- a technique similar to the one used in Private School Entertainment's music video for Exist Elsewhere's song "Tokyo". Here's their in-depth explanation of how it was done:
To make your own autostereogram, you must first create something called a "depth map," which is a 2D representation of 3D depth information. We collected real-time depth data of Young Rival performing the song using an Xbox Kinect hooked up to a computer. The computer was running software called RGBD Toolkit, designed to capture the depth information from the Kinect using its built-in infrared system.
Once we had our depth information, we unpacked it into image sequences and edited these sequences as if they were regular video. The only difference in the editing process was that depth was represented by luminosity. With much trial and error, we then ran the data through an algorithm that took each frame of depth information, converted it into a random dot stereogram image, and repacked the frames into the final video. There was one more colour pass at the end, and voila.
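The band's explanation covers the pipeline but not the conversion step itself, and their actual code isn't public. To make the idea concrete, here's a minimal Python sketch of the classic random-dot stereogram construction (a simplified version of the well-known constraint-linking approach; the function name, parameters, and defaults are my own illustrative choices, not the video team's). It takes a single grayscale depth frame -- brighter means closer, exactly the "depth as luminosity" representation described above -- and forces pairs of pixels to share a colour, with the pair separation shrinking for nearer points so they appear to pop forward:

```python
import numpy as np

def autostereogram(depth, eye_sep=80, depth_scale=0.3, cross_eyed=False, rng=None):
    """Generate one random-dot autostereogram frame from a depth map.

    depth: 2D float array in [0, 1], where 1.0 is nearest to the viewer
    eye_sep: pixel separation for points at zero depth (the background)
    depth_scale: how much the separation shrinks for the nearest points
    cross_eyed: invert the depth so the cross-eyed viewing technique works
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = depth.shape
    if cross_eyed:
        depth = 1.0 - depth  # swap near and far for the cross-eye variant
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        # same[x] records which earlier pixel x is constrained to match
        same = np.arange(w)
        for x in range(w):
            # nearer points (higher depth) get a smaller separation
            sep = int(eye_sep * (1.0 - depth_scale * depth[y, x]))
            left = x - sep // 2
            right = left + sep
            if left >= 0 and right < w:
                same[right] = left
        # colour the row left to right: unconstrained pixels get a random
        # dot, constrained pixels copy the partner they were linked to
        row = np.empty(w, dtype=np.uint8)
        for x in range(w):
            if same[x] == x:
                row[x] = rng.integers(0, 2) * 255
            else:
                row[x] = row[same[x]]
        out[y] = row
    return out
```

Running this over every frame of an edited depth sequence and re-encoding the results would approximate the pipeline the band describes; toggling `cross_eyed` would produce the second viewing variant of the video.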
There you have it. Have you ever tried this technique before? If so, can you give more information on how it's done? Share your thoughts/ideas/advice in the comments below.