February 13, 2013

Is the First 'Instant' 3D Modeling and Motion Capture Camera the Future of CG Animation?

In creating computer generated imagery, reference photographs of real-life objects may assist modeling, texturing, and animating a 3D object. In animation, this practice translates into something called motion capture, or 'performance capture' when facial expressions are the focus (see: Avatar). Fixed reference points on an object or surface help artists recreate something virtually, but Microsoft's Kinect for Xbox 360 is actually able to recognize shape and motion on its own, turning you into a full-body video game controller in real time. The new Lynx A Camera looks to take this a step further. Meet the world's first 'point-and-shoot' camera that can model and capture the geometry, texture, and motion of anything you aim it at, right before your eyes.

First off, it's worth noting the ways in which the Lynx A does in fact differ from the Kinect:

A Kinect is a 3D imaging sensor that provides a raw feed of 3D points. The Lynx A produces detailed meshes, motion files, and 3D panoramas in real time thanks to its integrated hardware/software design. The research that makes this possible took a year and a half of work by a dedicated research team. It's certainly true that many of the hardware components are readily available, but the same could be said of an Xbox 360. The real magic is in the software!
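To make that distinction concrete: a raw depth-sensor feed is just an unordered cloud of 3D points, and turning it into a usable surface is a separate reconstruction step -- the step the Lynx A claims to perform on the fly. Below is a minimal sketch of doing that step offline, using the open-source Open3D library (which postdates this article); the file names are hypothetical:

```python
# Sketch: turning a raw depth-sensor point cloud into a triangle mesh,
# i.e. the reconstruction step the Lynx A claims to do in real time.
# Uses the open-source Open3D library; "kinect_scan.ply" is a hypothetical file.
import open3d as o3d

# Load a raw point cloud (the kind of data a Kinect-style sensor emits).
pcd = o3d.io.read_point_cloud("kinect_scan.ply")

# Surface reconstruction needs per-point normals.
pcd.estimate_normals()

# Poisson reconstruction fits a watertight surface to the oriented points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Export the mesh for use in a 3D package.
o3d.io.write_triangle_mesh("kinect_scan_mesh.ply", mesh)
```

On a desktop machine this runs offline, in seconds to minutes depending on point count; packaging equivalent processing into a handheld device running live is the hard part Lynx is selling.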

The project is currently (and already successfully) being Kickstarted, while receiving quite a bit of attention in the tech press as well. Here's the pitch video from the camera's designers at Lynx Laboratories:

It's going to take a pledge of $1,800 to get your hands on the camera via pre-order, but this is a singular piece of video capture technology. If successfully developed and released, I would venture to name the Lynx A as one of the truly 'next gen' technologies in CGI. The concept will take some time to fully iron out, and the first iteration may simply be too experimental or low-fidelity for serious photographers, filmmakers, or 3D artists. (The optical color camera component only shoots up to 640x480, so the textures are going to be pretty low-res for people used to modern photo quality.)

That said, the possibilities are intriguing, and the technology is promising. Much like Lytro's light field concept, the point-and-shoot 3D camera may well mature into something even professionals with the highest standards would consider practical. Below are some interactive examples of the "raw" data the camera can produce:

If nothing else, the Lynx A could provide a super-portable, all-in-one previsualization or first-pass CGI solution. After all, the information it captures can be exported into software such as Blender, Maya, or 3D Studio Max for clean-up or further detailing and tweaking. You might even use it just to give CG work a head start -- perhaps it's a way for the art department to quickly and effectively keep the CGI team up to date, 'beaming' over scans of set designs or props that must eventually be recreated digitally at higher quality. Similarly, it could produce models and motion-captured animations for use as digital stand-ins or temp visual effects. The exciting thing, I think, will be the ways a camera like this could expedite the digital filmmaking process, especially on VFX-heavy productions.
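Purely as a sketch of that hand-off, here is what a first-pass import-and-decimate step might look like in Blender's Python API (the 2.6-era API current as of this writing); the file path, object handling, and decimation ratio are all assumptions:

```python
# Sketch of a first-pass scan cleanup inside Blender (run from its Python console).
# Assumes a Blender 2.6x-era API; "lynx_scan.obj" is a hypothetical export.
import bpy

# Import the scanned mesh exported from the camera.
bpy.ops.import_scene.obj(filepath="/path/to/lynx_scan.obj")
scan = bpy.context.selected_objects[0]
bpy.context.scene.objects.active = scan

# Dense scans are heavy; a Decimate modifier gives a quick first-pass reduction.
mod = scan.modifiers.new(name="ScanDecimate", type='DECIMATE')
mod.ratio = 0.2  # keep roughly 20% of the faces (assumed value)

# Bake the reduction into the mesh before hand-detailing begins.
bpy.ops.object.modifier_apply(modifier=mod.name)
```

From there, an artist would retopologize, re-texture, or rig the mesh as usual; the scan just replaces the blank-canvas starting point.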

What do you guys see in the future of CGI with this type of technology? How do you see yourselves using this camera to the advantage of your productions or your modeling and animation work?

Links:

[via The Creators Project]


19 Comments

I already do this with my iPhone and Autodesk's 123D Catch. Works incredibly well.

Task Tanner, February 13, 2013 at 8:43PM

I was going to say: why not just use 123D Catch?

Jeff Akwante, February 13, 2013 at 8:53PM

I don't know -- seems like they could have done the same thing with an iPhone/iPad app and not charged so much money... This seems silly to me (is it just me, guys/gals?). And 123D Catch is basically the same thing, and I think it does a better job than what I'm seeing here.

I don't get it.

Jer, February 13, 2013 at 10:02PM

For someone like me who hadn't heard of this or 123D Catch... I'm thankful :D
Woot!

Will, February 14, 2013 at 5:59PM

123D Catch isn't real-time; it sends the data to Autodesk to be processed. These guys are looking to make a real-time device that doesn't require too many steps. There are those who are willing to pay for that.

Thomas, February 21, 2013 at 9:08PM

Using tech that has been around for years and packaging it into one tool is not a bad idea, but better tools exist. The lack of detailed capture, plus asking the community for feature sets, doesn't paint a winning picture of their future. Might be perfect for M&A with a team that can drive results.

ThunderBolt, February 13, 2013 at 9:01PM

Maaaad technology. Makes me wonder if I should bother continuing to learn Maya modeling. I'm not an Apple dude, so not sure about that 123D thang, but this is wicked.

fiftybob, February 13, 2013 at 9:39PM

For animation, the topology of the mesh is important, and I don't think this tech can solve that, at least not yet. So thinking through the way you model a face, a body, or any organic mesh is still important. It may be great for non-organic models, for sure.

guto novo, February 14, 2013 at 12:47AM

Well, non-organic modelers who only make models out of pieces that exist in real life...

cows, February 14, 2013 at 3:59AM

The other problem is that to get a full model, you're going to need a 360-degree view of it, and depending on its size that may become a problem. Also, if you're going to model a mechanical object with parts that work in a kinematic hierarchy, that may be a problem too... but who knows where quantum computing will take us in the future? :)

guto novo, February 14, 2013 at 1:24PM

Thanks, Task Tanner and Jeff Akwante, for the tips about 123D Catch.

c.d.embrey, February 13, 2013 at 11:06PM

I have to admit I'm not terribly impressed by how this looks... It looks about as good as 123D Catch and adds in motion capture, but until there's an automatic pipeline that cleans up the data and lets scanned objects drop into a CGI scene without hours of cleanup, then... meh.

Dovahkiin, February 14, 2013 at 8:38AM

Not sure I'm completely sold on this, given that similar products (software) already exist... why not just use an iPad? Geekiest video ever; everyone is reading off a cue card lol

February 14, 2013 at 9:42AM

Is it just me, or is everyone in that video 12 years old?

David S., February 14, 2013 at 11:21AM

This has a lot of uses from a previs and even a vis standpoint. The motion capture looks like it could save you a bit of money. But something tells me this could be handled on the software side as opposed to this hardware solution: some kind of camera tethered to your laptop. Also, the models generated by 123D Catch are mostly useless.

JEF, February 14, 2013 at 1:35PM

This is just a Kinect-type sensor and a laptop in a ginormous box. The output seems worse than Kintinuous.

Asdf, February 14, 2013 at 1:48PM

Sorry, but this is neither new, better, nor cheaper.

Use 123D Catch + Brekel (http://www.brekel.com/) and you could do all of the above a year ago, for free (if you already have a still camera + Kinect + computer).

Horst, February 14, 2013 at 3:41PM

Like any tools, you still need someone trained in animation, modeling, rigging, texturing, lighting, and more. There are no tools that replace the artist.

Wayne Lam, graduate of Vancouver Film School (visual and 3D), February 19, 2013 at 3:19PM

That all depends on what needs to be done. The tools can't figure out why something needs to be done, but they can execute commands as needed.

That happens when the toolset gets sophisticated enough that the artist may just be the storyteller, no longer needing other people to help execute the idea.

That's part of why the VFX industry is in the bad shape it's in. The technology has progressed to the point where you don't need a Ph.D. to use the tools.

The biggest complaint from a lot of professionals is that we're in a race to the bottom, which to them means the tools are getting so easy that anyone can claim to be an expert.

Thomas, February 21, 2013 at 9:14PM