
Kinetic Images: Creating Stunning Motion Capture Video Using a DSLR, Kinect, and Projector


The Kinect isn’t just for learning how to do the “jerk” on Dance Central anymore. It’s quickly becoming a tool that artists working with all sorts of mediums are harnessing for its marker-less motion capture power. NFS sat down with Private School Entertainment’s Andrew Gant to talk about how they used a Kinect, a DSLR camera, a projector, and the RGBDToolkit for some out-of-this-world filmmaking. Continue reading to find out how to access the cinematic capabilities of the Kinect.

Below is a behind-the-scenes featurette on how Gant and his team pulled off projecting motion-captured images taken from the Kinect and filming it all to create an incredible music video for Exist Elsewhere’s “Tokyo,” which you can watch below it. After the videos, check out the interview with Gant about how he pulled off these images.

NFS: So, how the hell did you do that?
Andrew Gant: It’s easy! Anyone with a little time, a camera, a Kinect, and some passion can do it.

NFS: Can you explain your process? 

AG: The first thing I did was draw on years of jealousy from watching people do amazing things with the Kinect. I used this anger and sadness to inspire myself to get to the bottom of how to make these cool effects. I’ve actually been dreaming of this since 2010, when I found this video from a site called Flight 404. This was the first video I ever saw of a Kinect hack, and it blew my mind:

The major problem was that I did not know how to program, code, or hack anything. My coding knowledge extends to old MS-DOS commands and HTML for websites. I tried learning a bit, but I kept hitting too many roadblocks.

NFS: Also, how did you learn how to use the Kinect to produce these images?

AG: When I was offered the Exist Elsewhere job, I thought about how perfect it would be to use these effects. The song was about escaping reality, and I really wanted to send this band into a digital world. I went back to researching the Kinect to see if anyone had created an easier way to make these effects. Luckily for me, I found the RGBDToolkit. This enabled me, as a filmmaker, to use the Kinect with my camera and computer.

NFS: If a filmmaker wanted to reproduce this, what would they need in terms of equipment/software/camera/projector/ etc.?

AG: Equipment: Depending on your concept, you’re going to want plenty of lighting, grip equipment, and stands. If you’re traveling away from power, you’re going to need generators. But if you want to go all natural light, you won’t need any of that. Beyond the basics, here’s what you’ll need:

Projector: Get something that’s within your budget and has the best reviews. Also, be sure to throw your TV away after the job, because there’s no better way to enjoy entertainment after you turn your entire wall into a screen.


Kinect: To go with the Kinect, I would definitely own a DSLR. I’ve read that you can technically use a GoPro or even an iPhone with the RGBDToolkit, but I’ve yet to see evidence of that. The Kinect and the camera never actually talk to each other, so as long as you mount the two devices close enough together to match each other’s field of view, I think you could pretty much use anything.

Also, I just recently learned that you actually don’t need a camera to use the RGBDToolkit. So, maybe see what it looks like without a camera? Maybe your concept will work without it. I’ve yet to try this, but I’d be excited to test it sometime.
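If you’re curious what the sensor captures on its own, the open-source libfreenect drivers can pull frames straight off an Xbox 360 Kinect with no camera attached. Here’s a minimal sketch of that, assuming the libfreenect Python wrapper (freenect) plus NumPy and OpenCV are installed; note this is a standalone preview, separate from the RGBDToolkit itself:

```python
# Minimal sketch: grab one depth frame and one color frame from an
# Xbox 360 Kinect using the libfreenect Python wrapper ("freenect").
# Assumes freenect, NumPy, and OpenCV (cv2) are installed; this is a
# standalone preview, not part of the RGBDToolkit.
import freenect
import numpy as np
import cv2

# The sync_* helpers block until the Kinect delivers a frame.
depth, _ = freenect.sync_get_depth()  # 640x480 array of 11-bit depth values
rgb, _ = freenect.sync_get_video()    # 640x480 8-bit RGB image

# Map the 11-bit depth range (0-2047) into 8 bits so it displays as grayscale.
depth_8bit = (depth / 2047.0 * 255).astype(np.uint8)

cv2.imshow('Kinect depth', depth_8bit)
cv2.imshow('Kinect color', cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))  # cv2 expects BGR
cv2.waitKey(0)
cv2.destroyAllWindows()
```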

If you are more curious about what the RGBDToolkit can offer, I encourage you to visit their website. Also, their forum is filled with very helpful people. I myself have asked for help there many times before. Many thanks to them!

Camera: I used a combination of the 5D MK II and the MK III. The MK III did much better in low-light situations on set at night.

Software: RGBDToolkit, a good editing program (Final Cut, Premiere, or Avid), and After Effects. I did a fair amount of color correction and effects on my Kinect footage.

Artist: I also couldn’t have done this without Sarah Oh, the wonderful artist who created the lightman. She has been doing all of the amazing art for Exist Elsewhere, and she and I had many conversations about how we could bring the lightman to life. It was really fun talking with her about the poses she could draw and the story we could give the little man.

Etc: As a filmmaker, I believe it’s absolutely important to do your research on all the software you use: watch video tutorials, read white papers, etc. Knowing your programs is the key to efficiency and maximizing your time on a project. Learn your shortcuts, and eat your fruits and vegetables.


NFS: How much does something like this cost to produce?

AG: Kinect: $100

Kinect to Camera Mount: ~$100 (prices change, or you can build one). My advice is to buy one if you aren’t tool savvy.

RGBDToolkit: Free (Donations appreciated)

Camera: Depends on what you use

Projector: Again, it depends, but a good low-end projector that works in your house would be around $1,000 – $3,000. Mine was a home cinema projector. If you go with high-end projectors, some can be more expensive than a car. You may be able to cover entire buildings with those types! I recommend staying away from those unless you have a big budget.

Time: Lots of this is required. This is priceless. But make sure you take time to do your research on projectors and the RGBDToolkit.

Crew: Depends on your concept and your needs!

Locations: Depends on what you can get away with for your crew size. Choose your locations wisely. Some locations require permits; some don’t. Know your city’s film rules as well: what are you allowed to do without getting a permit?

Stage: Your concept may not need a stage to shoot on. For this shoot, we used Apache’s stages in LA because we needed to have a controlled lighting set-up and 12 hours of complete darkness to film in.

NFS: What kinds of issues did you run into while filming?

AG: My first issue was building the mount for the RGBDToolkit, which connects the camera to the Kinect. At the time, the gang at RGBDToolkit was not selling mounts. All I had to work with were instructions on how to build one: a long list of screws and scrap metal that I had to piece together into a mount.

Kinect Rig

For hardware-savvy people, this may or may not be an easy job. For me, though, I needed something solid and perfect, especially when you are on set and time is ticking. The worst thing that could happen is that you do the entire job and your footage doesn’t match up because the Kinect slid out of position.

After running around hardware stores, asking people to cut my scrap metal into shape, I finally built a mount and it actually worked! The joy of completing the concept, planning, organizing, shooting, editing, and more on this video, did not match up to the gratification I received after completing my RGBDToolkit mount!

This is probably very comedic to some people, because they may have found a solution in 10 minutes, where it took me days to travel between hardware stores searching for all the pieces. I was so happy to finally test the Toolkit when I was finished. I fired up the software and immediately filmed a test. This was just me, being an idiot in front of the camera playing Daft Punk’s “Get Lucky,” which had just leaked onto the internet.

So the mount worked, but it wasn’t robust enough, and I couldn’t change the angle of the Kinect well enough. Of course, days before the shoot, RGBDToolkit began selling professional mounts on their site! I immediately purchased one, and it has worked great for me.

The next issue was the projector. I had to make sure it worked outside at night. So again, I did a test in our parking lot to make sure it would be bright enough. As you can see, it was pretty successful even with the street light. That way I knew it would at least perform well in somewhat lit situations.

Kinect Light Test

NFS: What gear did you use in the music video? (camera, lenses, lights, etc.)

AG: Canon 5D MK II and MK III, with all types of Canon lenses. Sometimes we even used tilt-shift lenses for the lovey-dovey flashback scenes. Lighting varied on each set. If we were on a controlled set or location, we were able to bring out Kinos and HMIs. When we were running around downtown, however, we stuck mainly to light panels and natural light.

NFS: What do you expect out of the new Kinect? What will the new features help filmmakers do?

AG: I think there are two major areas where it may benefit filmmakers.

First off, as a marker-less motion capture device. The Xbox 360 Kinect has already been used in multiple situations to replace high-tech motion capture techniques. I’m not a complete expert in motion capture, but I do think it may be safe to say that the Kinect is definitely a “game changer” in this respect. It is the Canon 5D MK II of the motion capture industry!

With the new Kinect, the obvious major upgrade is the 1080p resolution. Once you start playing with the old Xbox 360 Kinect, you will notice that it outputs in standard def. All the Kinect footage you see in my video is actually standard def, resized to HD, with HD graphics applied over it. I’m most excited about the resolution upgrade because you will no longer have to upscale any footage. Another filmmaking plus is the “night vision” it will have: lighting should not affect what it sees.
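The exact resize step isn’t spelled out here, but the idea is simple: the 360 Kinect’s 640×480 output has to be blown up to fit a 1920×1080 timeline before the HD graphics go on top. Below is a minimal sketch of that upscale in Python with OpenCV; the pillarboxing, cubic interpolation, and file names are illustrative assumptions, not the actual pipeline used on the video:

```python
# Hedged sketch: fit a 640x480 (4:3) Kinect frame inside a 1920x1080 canvas.
# The pillarboxing and INTER_CUBIC interpolation are assumptions for
# illustration, not the pipeline actually used on "Tokyo."
import numpy as np
import cv2

def upscale_to_1080p(frame_sd):
    """Scale a standard-def frame to 1080 tall and center it on a 16:9 canvas."""
    target_w, target_h = 1920, 1080
    h, w = frame_sd.shape[:2]
    scale = target_h / h                  # fill the frame top to bottom
    new_w = int(w * scale)                # 640 * (1080 / 480) = 1440
    resized = cv2.resize(frame_sd, (new_w, target_h),
                         interpolation=cv2.INTER_CUBIC)
    canvas = np.zeros((target_h, target_w, 3), dtype=frame_sd.dtype)
    x0 = (target_w - new_w) // 2          # black pillarbox bars left and right
    canvas[:, x0:x0 + new_w] = resized
    return canvas

# Hypothetical file names, just to show usage.
sd_frame = cv2.imread('kinect_frame_sd.png')
cv2.imwrite('kinect_frame_1080p.png', upscale_to_1080p(sd_frame))
```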

Kinect projection

Secondly, the area that I think this device will excel in is picking up where the Xbox 360 Kinect left off. Many creatives repurposed this device to create new interactive art and experiences. If you are a filmmaker, I challenge you to open your mind for a second, and think into the far future.

Here’s a wild example for you: what if your computer came with a Kinect? What if the Kinect was as common as a computer mouse and came bundled inside every new computer? Everyone who owns a computer, everyone who goes on YouTube, would have a Kinect. Now, let’s take a look at what the Kinect would then offer you as a filmmaker.

The new Kinect can read your heartbeat, has facial recognition, can track the force behind movement, and much more. So if these features were available to you as a filmmaker, how would you make your music video or short? What would change? It’s a great time to be a filmmaker. Tiny revolutions happen each day with new technology, and I believe this may just be the beginning for the Kinect and the similar devices ahead of it.

***

Thanks to Andrew Gant for answering our questions!

What do you think? Do you think the Kinect will have more of a role in filmmaking in the future? Have you ever made a “kinetic” film before? Let us know in the comments.



15 COMMENTS

  • Brilliant article! Stuff like this is why I love NFS!

    • mikael_bellina on 08.30.13 @ 7:12PM

      +1 Very nice article, and the video explains it well. But I’m not sure this effect couldn’t be created with a plugin, like a Trapcode plugin. But yes, the Kinect for sure has a lot of future in this field!!! Let’s be creative!!

  • One part he left out that you see in the BTS clip: have a REALLY good Grip on set :-)

  • Lots of potential, but the digital effect you created for the video doesn’t look that cool. I don’t think the effect looked that interesting in the “House of Cards” video, and I hope I don’t see it in every hack video for the next year or two.

  • pacificbeachca on 08.30.13 @ 12:52PM

    Awesome! Oddly enough, this was posted on Engadget at the same time:
    http://www.engadget.com/2013/08/30/nine-inch-nails-kinect-live-production/

  • Harry Pray IV on 09.4.13 @ 10:57PM

    RGBZ is going to replace many different technologies that we use today. Here’s a little blurb I wrote a few months back.
    Regardless of how you feel about Avatar’s (or even Beowulf’s) visual styles, I think they illustrate the interesting abstractions that one can conjure up using motion/depth/performance capture on a virtual stage.
    That said, just like RAW video, these technologies are potentially harmful for the unprepared cinematographer (and everyone on set except the director) mainly because they take away our ability to “bake in” our artistic choices.
    You think you got burned when you shot everything in RAW and the producers/director went in and completely changed the entire look of the film after you spent weeks coloring it? That was only the tip of the iceberg. The only solution going forward seems to be to lock in your level of control in the contract negotiation phase, but that’s another discussion entirely.
    Nonetheless, there are a few exciting (and equally frightening) benefits that I think we’ll see that not many people expect from this technology.
    Luddites, please ready your barf bags.
    Changing camera angle: if the depth is captured alongside a stream of cameras that capture color and luminosity and are arranged orthogonally, we have an infinite ability to alter the angle in post.
    Lighting: depth/color/luminosity can be a deadly combination for relighting. Currently, relighting is limited because it requires a huge amount of quasi-manual modeling and/or bullet-time-style geometric reconstruction. What happens when the 3D data is part of that stream?

    Focus: if the computer knows what depth everything is at, there won’t be as much of a need to create that depth in-camera. This may bring us back to the days of 2/3″ sensors and deep-focus shooting if we can later add depth effects in post. Or this tech could actually allow us to more accurately stay in focus on absolutely mammoth sensors. Software could be developed to more faithfully follow points in space if there’s a depth stream that is already being captured… (super Easy Focus/mutant panatape).

    Atmospheric effects: point clouds are the secret ingredient for this kind of immersive CG as well. The endless hours spent pulling depth from flat images would be gone.

    Keying: if the software already knows where the edges are, we (probably) won’t need to rely as heavily on a green screen. Imagine the lighting that could be accomplished if you didn’t need to pull a clean key from that digital green backdrop. This may even open up a whole new world of using display technologies for lighting/actor immersion. I understand that this is directly proportional to the resolution of the captured point cloud, but a combination of both would certainly be more accurate than either separately.

    Lens distortion: you would be able to add it or subtract it. Perhaps this may bring about a style where you could shoot high resolution on a fisheye and compensate in post for the spatial/barrel distortion inherent to such optics. You could get all of your coverage from one lens and take… which leads me to the next technology that I see on the horizon:

    • Zzzzz.
      Some people go out and make amazing videos, like Andrew Gant.
      Others write overlong, self-important, bloated posts and never make anything innovative.

  • Nice article!
    Would be awesome to have more like this in NFS!

    “We are also coming up with an incorporated town; Ansal Megapolis is 5km from JP Greens in Greater Noida. With so much to be gained by a good movie experience, it’s easy to see why even a lot of bad movies become blockbusters. It might be a good idea to make a backup of the file and save it to a folder on the desktop before messing around with these files.