This New Technology Lets You 'Touch' Objects Onscreen—and Could Change CGI Forever

Meet the future of CGI and augmented reality.

Since the 1960s, the best way to simulate an object's motion in space has been 3D modeling. An integral element of the CGI process, it's a cumbersome, expensive endeavor, one that requires hundreds of hours of manpower, state-of-the-art technology, and ever-changing algorithms to produce even the shortest sequences. 

Interactive Dynamic Video (IDV), a new technology developed by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), could significantly streamline that process. By analyzing how objects in ordinary video vibrate at different frequencies, IDV builds a model of their physical behavior and can predict how they will move in new situations—an unprecedented achievement in motion graphics.
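Conceptually, the idea resembles modal analysis: once the vibration modes and frequencies of an object have been estimated from video, its response to a new "poke" can be predicted by superposing those modes. The sketch below is a simplified illustration of that principle, not CSAIL's actual implementation; the mode shapes, frequencies, and damping values are hypothetical inputs.

```python
import numpy as np

def simulate_response(mode_shapes, frequencies, damping, force,
                      steps=120, dt=1 / 30):
    """Superpose damped vibration modes to predict displacement over time.

    mode_shapes: (num_modes, num_pixels) hypothetical per-pixel mode shapes
    frequencies: (num_modes,) natural frequencies in Hz
    damping:     (num_modes,) damping ratios per mode
    force:       (num_pixels,) an applied "poke" in image space
    Returns a (steps, num_pixels) displacement field, one row per frame.
    """
    omega = 2 * np.pi * frequencies
    # Project the force onto each mode to get its initial excitation.
    amplitudes = mode_shapes @ force
    frames = []
    for k in range(steps):
        t = k * dt
        # Each mode decays exponentially and oscillates at its own frequency.
        modal = amplitudes * np.exp(-damping * omega * t) * np.cos(omega * t)
        frames.append(modal @ mode_shapes)  # map back to pixel space
    return np.array(frames)
```

In the real system the mode shapes are recovered from tiny motions in the source footage itself, which is what lets IDV animate an object without any 3D model of it.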

It could reduce the cost of the CGI process by eliminating the need for green screens.

Using the technology, you can reach in and "touch" objects in videos, manipulating the environment according to your movements. This functionality has huge implications for virtual and augmented reality. An AR game such as Pokémon Go can insert virtual characters into real-world environments, but the two worlds remain distinct. IDV, however, permits interactivity between the virtual and the real—letting a Pokémon bounce off the leaves of a real tree, for instance, as shown in the above video.

In the near future, IDV will likely prove even more beneficial to the film industry. To create models of virtual objects or environments in the frame, filmmakers today must rely on the time-consuming and potentially expensive green screen process. IDV could supersede green screens by enabling a cinematographer to make minor edits in-camera—such as masking, matting, and shading—to achieve a similar effect at a fraction of the cost and time.


6 Comments

I won't hold my breath.

August 23, 2016 at 1:00PM


very interesting. This definitely will make CGI easier to work with for many VFX artists when composing their new assets into the actual film at least!!

August 23, 2016 at 6:01PM

kicap

really cool! It will be nice when this technology becomes available for us low budget filmmakers.

August 23, 2016 at 7:29PM, Edited August 23, 7:31PM

Anton Doiron
Creator/Filmmaker

Interesting technology, but not really going to help with green screen extraction. I can see it being useful for scanning an object into 3D with dynamics using multiple cameras. Add in some shader extraction technology (when/if that exists) and some light detection algorithms (which do exist) and you have an asset that can be added to a scene realistically.

http://www.jorg3.com/sites/joomla/media/files/CG2010_light_detection.pdf

August 24, 2016 at 5:18PM

Arnold Ronald Donald
VFX Supervisor

Porn

August 24, 2016 at 10:50PM


I gotta say, pretty disappointing compared to the grand title of the article. Interesting for sure, but it's a looong way from production ready, or even production applicable at all. And how would this replace green screen? You can push/pull an object a little bit, but you aren't going to actually knock over that tree they showed at the end. You still have to be able to replace the background. Anyway, I have a feeling this will end up less about filmmaking and more about games or other AR.

August 25, 2016 at 9:02PM

Douglas Bowker
Animation, Video, Motion-Graphics