
DSLR + Kinect = Add Depth to Your Footage, Take a Peek into the Future

05.2.12 @ 7:36PM

What do you get when you map footage from your DSLR onto volumetric data from a Kinect?  How about strange, exciting sweetness!  A very enterprising fellow by the name of James George has developed the RGBDToolkit, a workflow that lets you marry the two tools with some very intriguing results.  You shoot through a DSLR attached to a Kinect, and after calibrating both devices to a checkerboard, you can start creating some incredible imagery.  But don’t take my word for it; check these videos out:
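
For the technically curious, the payoff of that checkerboard calibration is a set of camera matrices plus the rotation and translation between the two sensors, which is what lets each depth pixel be matched to a color in the DSLR frame. Here’s a minimal numpy sketch of that projection step; the intrinsics, rotation, and 5 cm offset below are made-up placeholder values for illustration, not the toolkit’s actual calibration output:

```python
import numpy as np

# Placeholder intrinsics (focal lengths and principal points); the real
# numbers come out of the checkerboard calibration of each camera.
K_depth = np.array([[580.0,   0.0, 320.0],
                    [  0.0, 580.0, 240.0],
                    [  0.0,   0.0,   1.0]])
K_rgb   = np.array([[1400.0,    0.0, 960.0],
                    [   0.0, 1400.0, 540.0],
                    [   0.0,    0.0,   1.0]])

# Extrinsics from the depth camera to the DSLR, also recovered during
# calibration. Here: no rotation, DSLR assumed 5 cm to the side.
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])

def depth_pixel_to_rgb_pixel(u, v, z):
    """Back-project depth pixel (u, v) at depth z metres to a 3D point,
    then project that point into the DSLR image to look up its color."""
    p = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])  # pixel -> 3D point
    p_rgb = R @ p + t                                       # depth frame -> DSLR frame
    uvw = K_rgb @ p_rgb                                     # project with DSLR intrinsics
    return uvw[:2] / uvw[2]
```

A real pipeline would run this over every depth pixel at once (vectorized) and deal with occlusions, but the geometry boils down to those three steps.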

Here’s an overview of the project:

And some more sample footage taken at a recent workshop:

Now, if you’re the average DSLR shooter, you might think: yeah, that’s cool, but what does it do for me?  Well, beyond creating some very cool experimental imagery on its own, this approach, if further refined, could serve a wide variety of purposes.  Imagine a multi-DSLR/Kinect setup where you could simultaneously shoot an actor from a variety of angles.  Perhaps by syncing them you could use that information to place the image of the actor in a 3D CG environment, and be able to create new virtual camera angles.  Or, motion-capture style, map alternate skins onto the figures being filmed.  Or map out a room and recreate it virtually.  Many of those things are being done already, but for many millions of dollars more, and here we have two off-the-shelf tools along with free open-source software that let us start dipping our toes into that world!

Who knows?  Perhaps volumetric scanning will become a common feature on future cameras, allowing for 3D projections, holograms and other assorted weirdness.

Just to be clear, I don’t know if you can do the things I just suggested without some major work; what I’m saying is that this is an interesting step in that direction (maybe some of you more tech-minded readers are already thinking about how to pull it off).  For those who just want to try this out right now, as is, things might look a tad “crude” if you’re after a photorealistic representation, but if you’re willing to explore a stylized look (think Tron or A Scanner Darkly) you could create some very cool stuff.  You might even go for extreme stylization by creating a script or filter that dynamically messes with the depth and color channels (i.e., separating an image into layers based on color).  I can see myself spending many an afternoon playing around with this.
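
As a starting point for that kind of color-layer experiment, here’s a tiny, hypothetical numpy filter (not part of RGBDToolkit; the function name is my own) that splits a frame into three layers by each pixel’s dominant channel. You could then offset, blur, or recolor each layer independently for a stylized composite:

```python
import numpy as np

def split_by_dominant_channel(img):
    """Split an RGB image of shape (H, W, 3) into three layers, each
    keeping only the pixels whose strongest channel is R, G, or B.
    A crude version of 'separate into layers based on color'."""
    dominant = img.argmax(axis=-1)             # 0=R, 1=G, 2=B per pixel
    layers = []
    for ch in range(3):
        mask = (dominant == ch)[..., None]     # broadcast mask over channels
        layers.append(np.where(mask, img, 0))  # keep pixel or zero it out
    return layers
```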

Ready to try it out?  You can download the software along with instructions for a DSLR/Kinect mount at the main RGBD Toolkit site.  For more sample footage, check out their Tumblr page.

If you get a chance to play with this, let us know what you think in the comments below!  Can you see other uses for this?

[**Correction**: Attribution error - James George is an artist fellow at the STUDIO for Creative Inquiry, and received funding support from the Playmodes audiovisual collective.  If you're in the NYC area, go check out an upcoming exhibition that shows off a 17-minute documentary made using the toolkit!]

[via The Verge]



22 COMMENTS

  • Going to try this out when I get a chance.

  • i’d be happy if volumetric scanning meant that you could film a scene with a wide angle lens, and be able to artificially apply DOF in post – since the raw data will actually know the depth of each pixel relative to the others. combine that with a high-res sensor, and you could potentially shoot a scene just once, at a wide distance, and then use various crops of that one take (with applied artificial DOF) to tell a story.
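
That idea is straightforward to prototype once you have a per-pixel depth map. Below is a rough, illustrative numpy sketch (the name and the banded box blur are my own simplification, nothing like a production depth-of-field filter): it blurs each pixel of a grayscale frame in proportion to its distance from a chosen focal plane.

```python
import numpy as np

def fake_dof(img, depth, focus, strength=4.0):
    """Very rough post DOF on a grayscale frame (H, W): blur each pixel
    in proportion to how far its depth (metres) is from the focal plane.
    Depth is quantised into a few blur radii and each band gets a box
    blur; a real filter would use a proper gather/scatter blur."""
    # blur radius per pixel, capped at 5 px
    radii = np.clip((strength * np.abs(depth - focus)).astype(int), 0, 5)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for r in np.unique(radii):
        if r == 0:
            blurred = img.astype(float)
        else:
            k = 2 * r + 1
            pad = np.pad(img.astype(float), r, mode="edge")
            # box blur: average of all k*k shifted windows
            blurred = sum(pad[i:i + h, j:j + w]
                          for i in range(k) for j in range(k)) / k**2
        out[radii == r] = blurred[radii == r]  # copy in this depth band
    return out
```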

    • Applying DoF in post could also be done through light-field technology. Now that Lytro has brought the tech to still photography, I can only imagine how disruptive it would be when applied to film. R.I.P., 1st AC!

      • fuc dat; just use the depth info for:

        1) hold-out mattes
        2) 3D collisions
        3) post stereo conversion
        4) match moving
        5) projection mapping
        6) whatever you want
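
The first item on that list is the easiest to picture in code: with per-pixel depth, a hold-out matte is just a threshold on the depth channel. A toy sketch (function name and depth range are hypothetical), where pixels inside the chosen range are kept and everything else is matted out:

```python
import numpy as np

def holdout_matte(depth, near, far):
    """Build a hold-out matte straight from the depth channel: pixels
    whose depth falls in [near, far] become 1.0, everything else 0.0.
    With real per-pixel depth this replaces a lot of manual roto work."""
    return ((depth >= near) & (depth <= far)).astype(float)
```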

  • Lachlan Huddy on 05.2.12 @ 10:55PM

    ^Agreed. If syncing a number of these setups to map a complete figure, as EM suggests in the article, is or will be possible, this will be massive for the kinds of hyper-stylized films I’ve always wanted to try making. Anyone know or can theorize if that’s a possibility?

  • Yeah, this could do what DSLR did for filmmaking, as in make motion capture a reality for many indies. A nice look into something we’ll probably start seeing a lot more use of in the future.

  • YouTube channel Corridor Digital’s new video actually used a Kinect for motion capture. It’s not perfect, obviously, but it’s a pretty neat idea.

    The video:

    The behind the scenes:

  • is it just me or did they release this software as open source but with no instructions on how to use it? plus none of the .exes start up on windows (missing DLLs)

    • yeah the windows stuff is pretty broken at the moment, but the os x stuff is working. we build the tools as we go, publishing constantly. I feel that if you are really into it you can get in touch and we work together to get it going for your case. If I waited until it was “ready” it would never be published.

      At the moment we’re jamming on a project but after it’s done I’ll give the windows version some love and get a solid release out. Along with some tutorial videos ;) Stay tuned, it’s a very very young work in progress.

  • fyi opensource = unsupported crap

    also the above video sucks, please dont post crap without proper warning.

  • Man, people are getting ruder on here by the minute. Real shame.

    • hazem abdulrab on 05.9.12 @ 9:46PM

      i know, wtf is wrong with these people.. seriously, this is a new and exciting experiment

  • I work for Electronic Arts; we have a rig that uses a Kinect and 16 webcams for crude mocap, as sort of a parlor trick when we do community events. Even with DSLRs I don’t think this sort of thing is production ready. Maybe the beefed-up Kinect 2 will help. Right now this stuff is all experimental, but there are some pretty fantastically hilarious bugs that happen when people wear hats or glasses or really anything that might block a camera’s full angle.

  • Weird how much it looks like the holograms in Minority Report.

  • With open source (free), sometimes you get way more than you pay for, e.g. Blender. Other times you get what you pay for, i.e. nothing useful. In most cases it’s somewhere in between. I went the paid-for route: NewTek’s LightWave 11.6 and their Nevron Motion plugin with Kinect for Windows. Much more costly, but I’ve been into LightWave 3D for years, so it was just an upgrade for me. It’s a complete solution for realtime markerless mocap and FBX and BVH retargeting. Like they say, you get what you pay for, and if you’re lucky, more than you pay for, as with Blender.

  • “Imagine having a multi-DSLR/Kinect setup where you could simultaneously shoot an actor from a variety of angles. Perhaps by synching them you could use that information to place the image of the actor in a 3D CG environment…”

    Halfway there:
    [vimeo 92876080 w=500 h=281] NoOddjob.obj from Steve Cutler on Vimeo.