Paul Trillo is no stranger to AI. You might have already seen his work, where he combines generated images, AI-supported rotoscoping, and live-action plates to create some striking artistic results.
But AI is speeding ahead in leaps and bounds. Runway Gen-2 is now being tested in a private beta, and Trillo was quick to make use of these new tools when he got access. Here’s his latest film that is completely AI-generated using this new tech:
Putting Together the Pieces
While Instagram is filled with unique AI-generated compositions, most of them are just that: prompt-generated content. For Trillo, the approach to using AI isn’t a blunt hammer but a super sharp scalpel or a paintbrush (if you want to keep with the artistic theme).
“Gen-2 is the strangest experience I've had creating video,” Trillo said. “And I've created a lot of strange stuff. It's a bit like shaking a magic eight ball until you get the answer you're looking for.”
Trillo previously made a short film for GoFundMe using AI. His approach then was the same as it is now, but these new tools put much more power behind his generated assets.
“The prompting structure works entirely differently, too,” Trillo said. “This [last film] was created by generated images, essentially storyboards, in the open-source version of Stable Diffusion called Automatic1111. I would write a bit of the story and ideas for visuals, generate some images, and then feed that text and imagery into Gen-2.”
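For readers curious what "feeding text and imagery" into these tools looks like in practice, here is a minimal sketch of building a request for the Automatic1111 Stable Diffusion web UI, which (when launched with its `--api` flag) exposes a local `/sdapi/v1/txt2img` endpoint. The field names follow that API; the prompt text, parameter values, and server address are illustrative assumptions, not Trillo's actual settings.

```python
import json

def build_txt2img_payload(prompt, negative_prompt="", steps=25,
                          width=768, height=512):
    """Build the JSON body for an Automatic1111 txt2img request,
    e.g. to generate a single storyboard frame."""
    return {
        "prompt": prompt,
        # Negative prompting weeds out unwanted aspects, as Trillo
        # describes below (e.g. anatomy errors, blurry faces).
        "negative_prompt": negative_prompt,
        "steps": steps,
        "width": width,
        "height": height,
    }

payload = build_txt2img_payload(
    prompt="surreal memory of a city street, dreamlike, cinematic still",
    negative_prompt="blurry faces, extra limbs, distorted hands",
)
print(json.dumps(payload, indent=2))

# To actually send it (assumes the web UI is running locally with --api):
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
# images = r.json()["images"]  # list of base64-encoded PNGs
```

The resulting stills can then be uploaded to Gen-2 as reference images alongside a text prompt, which is the image-plus-text workflow Trillo describes.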
Here are some of the stills that Trillo used as storyboard frames. It still blows my mind that none of this is real, but in fact, completely generated by an AI and simple text prompts.
Is the uncanny valley disappearing? Credit: Paul Trillo
“[Runway] Gen 2 breaks down all the elements that are in the reference image and then cross-references it with the text prompt and does its best to amalgamate and reconstruct it into a moving 4-second clip,” Trillo continued to explain. “The results vary, so you have to weed through a lot and continually refine the prompting until you get what you want. You can also use negative prompting to remove unwanted aspects like anatomically incorrect people or blurry faces.”
This whole process took quite a bit of time, too. Think of stop motion, but each frame is generated from AI-created reference material.
“I generated over 400 short clips and edited it down into this short,” Trillo said. “Because you have infinite options and things could always be better, it becomes a bit of a rabbit hole, and you have to pull yourself out at a certain point, or you'll tinker forever.”
AI-generated reference image. Credit: Paul Trillo
The Goal Was About Finding Limitations
Trillo is part filmmaker, part tinkerer. For him, creating this project wasn’t just about making a short but about testing how far AI has come and how far it still has to go.
“What I was looking to do with this piece is to take advantage of the aesthetic limitations of the AI,” Trillo explained. “The surreal and often uncanny nature of Gen-2 would be difficult to recreate with cameras or traditional animation. These otherworldly attributes lend themselves to dreams, memories, and alternate realities. I think Gen-2 is the closest to a snapshot of a dream that I have ever seen.”
We happen to agree. It’s eerie how realistic some of the most recent AI-generated footage has become. While some may be scared of the implications this could have, Trillo has a different approach.
'Thank You For Not Answering.' Credit: Paul Trillo
“This idea originally spawned out of a previous idea I had for a short before all these AI tools. It felt appropriate to use Gen-2 to reconstruct someone's memory of the world they lived in because that's sort of what the AI is doing,” Trillo said. “I'm more interested in exploring what makes these tools different than trying to recreate a Hollywood blockbuster on the cheap.”
But what do you think? Is Trillo’s approach to using AI making you want to use it on your next project? Are you worried about these new tools?
Let us know in the comments!