In many ways, there's no bigger, or perhaps more important, conversation happening in the film and video world right now than the one about AI. Since generative AI in particular began to creep into the filmmaking industry less than a year ago, we've seen the technology improve by leaps and bounds, seemingly overnight.

And since OpenAI introduced its insanely impressive new text-to-video model Sora last month, the conversations surrounding AI and its capabilities, practicalities, and ethics certainly feel like they're now in full force.

This is why we're excited to sit down and chat with some of the few filmmakers who have had the opportunity to work with Sora and create a short film as part of OpenAI's “First Impressions” series.

Here's our conversation with the creative group shy kids, a Toronto-based production company and band that drew on their specialization in colorful, energetic animation and innovative storytelling to test Sora's creative filmmaking capabilities, along with their further first impressions.


No Film School: So shy kids, thanks for taking the time to chat with us at No Film School. To start, could you tell us a bit about yourselves and how y’all got this opportunity to work with OpenAI’s Sora?

Patrick Cederberg: Yeah, so shy kids, we're an independent production company in Toronto. We've been doing our thing for a little over 10 years now. We work a lot in animation, but we also do live-action, and we do a lot of stuff in the documentary space. Most of our high-profile projects have involved doing graphics for documentaries and docuseries, and we also make music. We've kind of become this creative community working with different artists and creators.

Sidney Leeder: And in terms of how we were introduced to OpenAI, we were asked to create a pop-up installation for the TIFF premiere of a film called Dalíland, which was directed by Mary Harron, produced by Pressman Film, and is about the life of Salvador Dalí. So for our installation, we incorporated OpenAI's DALL·E model, which at the time was very new, and people were invited to play around with it and create their own surrealist pieces of art.

So that was where we first met the OpenAI team, and the installation went well. From there, I mean, shy kids have always made films about technology, so we shared some of our other work and were invited to join the OpenAI Artist test group. When Sora was ready to be put into practice, we were asked to play around with it as filmmakers and provide feedback to the researchers on what was working well in the creative process and where there could be improvements.

Walter Woodman: And as for our connection to No Film School, we love you guys and we read you all the time. Some people reach out, which is cool, and I can tell my parents that Bloomberg reached out, but No Film School is something I actually read all the time, so I'm really excited to talk to you.

No Film School: That's awesome to hear. Thank you for the compliments. Your short “Air Head” really stood out as one of the more authentic and indie-feeling films featured in Sora's “First Impressions” series, so we wanted to hear your thoughts on it from a No Film School-esque perspective.

For a second question though, we'd just like to know how much work y'all had done with generative AI before heading into this project.

Patrick Cederberg: Yeah, we've always been very tech-forward. Our very first film, "Noah," released back in 2013, was one of the first set entirely on a computer screen, and for that, we just wanted to see if it could be done. Since then we've also been very early adopters of VR headsets, and we were playing with developing stuff in the VR space.

And as for AI, we were playing with Stable Diffusion a lot in the early days, finding ways it could move into generative video and just trying to figure out how to make it do some cool-looking stuff.

And even for our last shy kids album, a lot of the music videos were kind of experiments within that space as well. And then there was the installation we worked on with OpenAI; I mean, that was where we saw DALL·E for the first time, and then, through that, eventually Sora. It's always very exciting and novel and very inspiring to be playing around with these new tools. There is so much you can do.

Walter Woodman: We've also used stuff like Topaz and other tools, so I kind of want to tell people to go through our filmography and see if they can spot the places where we've tried out AI, because we've been using it for little nips and tucks here and there for quite a while now.

No Film School: And if you don’t mind telling us, when did you first get your hands on Sora? And what have been your more thorough first impressions of it so far?

Walter Woodman: Well, we all saw the first images that they released around a month ago, and they were so cool. It's just so interesting, and the ones that stuck out to me were the girl on the train in Japan and some of the drone footage of a dust bowl kind of thing. It all immediately got me thinking about how these clips, which were really just tech demos, could actually be used for filmmaking.

As storytellers ourselves, our first instinct is always going to be “How do we break this, or how do we use this?” And so when we were sort of tapped on the shoulder to help the researchers with our thoughts, we just started to make stuff, and that eventually became “Air Head.”

And when we showed it to them, they were like, “Whoa, we didn't think you could have a character like that and have it be so consistent.” And to us, we had just treated it the same way they use Air Bud in those movies. With the right tricks you can't tell the difference between one golden retriever and the next, which is similar to how we went about making sure you couldn't tell one guy with a yellow balloon head from the next.

But it's really quite an amazing and expressive tool. That being said, it also comes with its own drawbacks, and it's still very much in its early days. There's obviously still a lot of work to be done, but for the moment, I think it's a very inspiring tool, and it really allowed us to go back into the ideas folder on our computer and dust off the old ideas that were too weird or too ambitious.

Patrick Cederberg: Too ambitious, or just didn't get funding, or whatever. But it's been fun to look back at different characters and ideas and re-examine them to ask, “What could that be now?” It's been really invigorating as a way to remove the gatekeepers and really see if your ideas have some juice to them. Or to realize, no, these ideas truly do stink, and maybe they should stay deleted forever.

We also definitely have a big collection of horrifying visuals that we produced during the project as proof of just how long it takes to get [Sora] to do what you want. And even then, there's still a lot of tweaking to be done to what it gives you. It has its frustrations, sure, but it's still early days.

Sidney Leeder: We also just loved the metaphor of a man with a balloon for a head, because it felt like there was limitless potential when we were using it, and our heads were kind of expanding just like our main character's in "Air Head."

No Film School: Could you tell us a bit about your process for integrating Sora into your usual filmmaking and editing workflows? What did "Air Head" look like from start to finish?

Patrick Cederberg: Yeah, I mean, I think as Walter said, it began with digging through ideas and trying to find something that worked, and then us all sitting around together, running these different ideas through the system, and seeing what it would spit out. Some of the stuff we thought would work didn't, and some stuff we thought would be sketchy ended up coming through and giving us some pretty inspiring visuals.

Once we landed on "Air Head" though, we were impressed with this “balloon man” visual, how Sora was able to handle the light through the balloon, and just the physics of it all. There was something about that first generation where we were like, “Oh, this is it.” It kind of solved a lot of technical problems for us too, and we were able to ask ourselves, “Where do we go from here? Where do we send him? What's his character profile, and what would his life actually look like?”

And then, as with anything, we took it to scripting and wrote a quick little script, something we could put together fast, and then we just started sitting around and working our magic with Sora's prompting, seeing what worked the same way you do with any generative AI.

It's kind of just a process of trying to master how to speak to it. And in the case of Sora, because it's so early days, there's still a lot of stuff filmically that you have to trick it into doing. So it was about finding those routes around the system to get it to do the camera moves you want, because it doesn't understand a lot of film terms right now.

Walter Woodman: Yeah, and I mean, you could go on set today and five different people would have five different definitions of what “pan up” or “truck” means. So I think it's kind of interesting trying to figure out how to convey these filmmaking intentions to an AI like this.

Patrick Cederberg: And then from there, once we had this large pool of material that fit the beats we'd set out in our script, it really was a lot like documentary editing. It was just a process of starting with all of this source material, putting it onto a timeline, and cutting it together into a coherent story. In some cases, we were doing generations that were 20 seconds long but only pulling two to three seconds out of them, because those were either the best parts or the parts where it didn't look like some freaky, mind-bendy AI world. And from there it just followed the usual post process as we've known it for the past 15 years, where we did voice-over, sound design, and soundtrack. And since we make our own music, we just used one of our own songs.

I will say that a big chunk of it was VFX as well, as we did quite a bit of compositing and color grading. There were a lot of generations from Sora where the balloon wasn't even yellow, even if we asked for it to be yellow. So we'd have to be like, this shot works, except for the fact that the balloon is blue, and then go in and work our magic to make it yellow so that we could use it. That was the process that pretty much got us to the endpoint.

No Film School: Awesome. And just as a follow-up, how long would you say you spent on the prompting and generating phase of working with Sora?

Patrick Cederberg: I mean, even in the edit, if we noticed a part that wasn't quite working, or we were like, oh, it'd be funny to drop a beat here, we could just go back and start to play around again. We never really stopped using Sora throughout the process. It was constantly something we could go back to that helped us polish the thing up.

Sidney Leeder: I'd say the film took us maybe a little over a week from start to finish to create. And in terms of generations, it usually takes about 10 to 20 minutes per prompt to see options. Sometimes those options don't work for you and you have to do it again, and sometimes they're great. So yeah, it all depends.

Walter Woodman: We're also working on a sequel, and in this next version there are going to be more live-action elements, so it'll be interesting to see how those work together. For this one, we didn't do much physical shooting, so it was more akin to an animation workflow, which we do a lot. So, not to pump our own tires too much, but we do this kind of work a lot and are quite efficient; for others just starting out, it might take much longer to produce even a short one- to two-minute video.

No Film School: To wrap things up, and to expand upon the first impressions of Sora that you shared in the OpenAI blog post, how would you say you feel about Sora itself, as well as AI in general? And how do you see AI changing the filmmaking landscape in the future?

Walter Woodman: Well, obviously we've kind of staked our claim here on being bullish about the positives, but we do understand the fear and the negative comments. We are on Twitter, we've read a lot of those comments, and we do get it. Although I do think it's weird to be defending the film industry as it stands today, which I don't think is the most supportive or creative industry when it comes to interesting ideas. And there's a multitude of reasons for that.

My hope though is that there are tons of people out there, whether that's people with disabilities who maybe can't go outside and shoot, or people in rural Nigeria who want to make their own sequel to Avatar, who can use this technology to keep up with the resources available to everyone else. A lot of the time, those people don't get to make their things. And for me, I'm very interested in what those people have to say, and I'm very interested in what stories are trapped inside of people.

Sidney Leeder: And as far as we're concerned, we don't plan on using AI for every project. We think it can be really helpful in specific scenarios and on specific projects, but we're still going to shoot on film, and we're still going to have human actors and human dancers. Every project requires its own set of ingredients, and sometimes Sora will be one of those ingredients, but it won't be the only tool that we use. It's just an additional tool in our toolkit now.

Patrick Cederberg: I personally wish that more people could use Sora right now because I think they would all come to a very quick realization that you need people. It's powerful. It's an incredible feat. It generates some really cool stuff, but even having used it, I'm like, I can't wait to go work with the humans.

But I will say, as we were discussing earlier, that if an executive wants to flatten their company so they make all the money and go make their own movie with AI right now, I'd pay to see that movie, and to see inside the mind of an executive who thinks they can just get AI to do it for them. Because I believe that very quickly we will realize just how important creatives and technicians are. So I'm actually encouraged that Sora will have the ability to democratize the film industry for indie filmmakers. If you're working with a really limited budget and you have something that's very ambitious, that's actually kind of possible now.