If there was one major takeaway from this year’s Adobe MAX conference in Los Angeles, it’s simply this: AI is here. Love it or hate it, it doesn’t matter. AI has been the talk of the town for several years now, and while it’s still taking shape, it’s undeniably a major force of the future.

But what does that future actually look like? As part of our coverage of Adobe MAX this year, we sat down to chat with Dave Clark, the co-founder and chief creative officer at the AI studio Promise, to hear what he thinks about this new future and the importance of finding balance between traditional filmmaking techniques and the new age of generative AI.


Disclosure: This interview was done as part of a roundtable discussion with other journalists at Adobe MAX. The questions and answers have been edited for clarity and into a more straightforward format.

Discussion Topic: Tell us a bit about Promise and your background in filmmaking.

Dave Clark: My background is in traditional filmmaking. I started off at Pratt Art School, went into commercial advertising as a creative director, and then made the pivot into directing commercials and brand films because I was always a filmmaker at heart. For many of the jobs I had, I was that creative director on set over the director's shoulder saying, "Actually, let's shoot it from this angle. Let's use this lens." Then I realized, maybe I should just be the director, so I made the pivot. I was living in New York at the time, doing work for Coca-Cola, HP, Intel, and a number of brands like that, and I was always intrigued by how technology was changing different industries.

Post-pandemic, I moved to LA and started pitching Hollywood to be a writer-director. I was in the room pitching one time, and the people I was meeting with said, “I love your script, but I can't visualize it.” That was frustrating. Within a week, I discovered generative AI text-to-image, and I thought, "Wow, I can actually use this to visualize and present my creative vision so it’s clear." I think that was the first moment when I realized the power of generative tools for creativity, and how they help you present a pitch in a way people can understand.

One of my co-founders, George Strompolos, saw a video that I did very early on. It was a spec spot for Adidas that I had put out, and it went mega viral on Reddit and LinkedIn. I remember him commenting, “I think there's a business here.” Fast forward maybe a year of us talking, and I was still making a name for myself in the filmmaking and AI space. We decided that we should start a studio built from the ground up around these new GenAI tools. George brought in Jamie Byrne, who he knew from his YouTube days, and the company was formed.

It was a great combination of backgrounds. I had the creative background. Jamie, who managed creators for years, understood scaling an entertainment and tech company, and George had previously started Fullscreen, which was a pioneering company in the creator economy and sold to Warner Media in 2019. To all of us, generative AI was the next technology that was going to allow a new creator economy to happen, and empower traditional creatives from Hollywood as well. With all of our backgrounds, we felt we could form a new, innovative company that could produce high-quality films, series, and new forms of storytelling.

Credit: Dave Clark

Discussion Topic: What kind of generative AI tools were you using at the start of your AI filmmaking journey, and what tools are you using now?

Dave Clark: We want to make sure that we're doing partnerships that are right for the type of work we want to do, but also we want to produce films that can be distributed theatrically or with streamers, and get into proper distribution channels. To do this, you have to be compliant.

You have to create content that can actually work for the different network partners. So for us, we have a partnership with Google (they're a strategic investor) and then we're working with Adobe hand in hand on using the Firefly model, which was great for the most recent production that I'm premiering here, “My Friend Zeph,” because it had all the models under one roof. Usually, with generative AI, some models do some things better than others, so having the option to mix models together to get certain shots is great. My film was a hybrid film where we had live-action sequences with actors on set and on a blue screen. Having the ability to change out the backgrounds, and having multiple models to work with inside of Adobe Firefly, was key.

We did things like flashback imagery of the lead actor as a child, where we used tools like Nano Banana inside Firefly to de-age her, with her permission, based on her childhood photos. And we were able to animate her and our robot, Zeph, in Veo. Adobe gave us the flexibility to mix and match the models based on the creative need.

Discussion Topic: How do you see AI filmmaking evolving over the next few years?

Dave Clark: We're in pre-production on an animated feature film that will use generative AI technology, traditional visual effects, and 3D tools. We're creating custom workflows and Python code, for instance, to make AI tools work with certain 3D products. And obviously, Firefly will be a big help because of the available models. But for me, with this film that we did here at Adobe MAX, we were able to show how you get something that's AI-generated to the same, or at least close to the same, color space as the live-action scenes. So we're pioneering what that pipeline looks like. How do you go from an 8-bit to a 16-bit color space with this type of stuff?

That's important if you want something to be on a big screen or to pass QC at a streamer like Netflix or Amazon. So it was very important for us to get the generative AI backgrounds to match the same quality as the 3D assets so that they can be edited with live-action assets. We’re very happy with how it turned out.

Discussion Topic: For the next generation of filmmakers, what advice would you give them for deciding between traditional filmmaking techniques versus using new generative ones?

Dave Clark: Well, AI isn't one size fits all. I talk to a lot of my filmmaker friends who are from traditional filmmaking. Many are not yet into AI for production, but they are interested in how you can pre-visualize an idea. And I'm starting to get some of them to adopt the pre-visualization of their ideas using AI tools because it's helping them in rooms when they're trying to pitch and help visualize what the movie could feel and look like. So I tell my friends who are indie filmmakers, “I'm not saying make AI movies, but make it part of your pipeline, because you can use it in previs, and you could probably find it helpful in post. You can still shoot your movie the way you want to shoot it.”

I say, “Use tools like AI to expand the world. You can now do bigger flashback moments, or if it's a horror film, you can do really cool creature work using generative AI. And it doesn't take away from what you love to do, which is filming on set and on location.” So I think there are ways to mix it into all parts of production.

It's not one size fits all. I still love shooting on set, and directing actors is my favorite thing to do. That will never go away, no matter how great the tools get. I think when people understand there's a balance and a choice, and that it doesn't have to replace everything, they become more comfortable.

Discussion Topic: How do you work with clients and educate them on what you offer and what generative AI is capable of these days?

Dave Clark: Promise is building a development slate of original IP for films and TV series. We want to produce our own projects and partner up with other studios on co-productions. These projects will be distributed in theaters and on streaming platforms.

Then we have a division called Curious Refuge, which is one of the largest AI filmmaking schools in the world, training emerging artists and professionals in over 170 countries. Caleb and Shelby Ward started the school and have been super successful in building a brand for people to learn AI filmmaking. They teach editing, they talk about storytelling, and they really focus on the technical and the artistic side of using these tools. They’re also partnering with the studios, big ad agencies, and marketing companies to help them train up as well.

We also have The Generation Company, which is basically our AI visual effects division. We want to be able to partner with studios and creative partners who have grand visions but may not have the budget for certain visual effects shots. So they are now seeing AI as an option. One of our offerings is also the GenCo Crew, which is our AI unit that can embed into a production at any phase. They could work with the production designer, the post-production team, the DP, etc. We hope to give artists the creative freedom to dream bigger.

Discussion Topic: How do you handle discussions about budgets with AI projects?

Dave Clark: I don’t look at GenAI as being simply about cutting budget, but about saving time, which I think is the real way to look at AI. Let's say a movie like a big sci-fi epic takes six months to film. Maybe if you use AI, you're able to complete it in three months instead of six. Obviously, there's money savings there, but it also frees up time, allows for more iteration, and lets you spend more time on the creative and the post side of things, because anyone who's dealt with traditional visual effects knows how long it takes. If you're able to save a little bit of time, that same artist, same crew, same actor can then move on to another production.

So I try to look at it as a way to save time, be more efficient, and really free up people. I'm a family man. I have three kids. I told my wife one of the greatest things about AI is that I don't have to work 18-hour days anymore. When I leave the set, I like to look at dailies and work with the VFX artists in post-production, and that's very time-consuming. If I'm able to see things quicker and sooner, it allows everyone to go home at normal hours, which no one really thinks about as a benefit, but it's really a huge benefit to the industry.

Michelle Slavich: I would add that if and when there is money saved, hopefully that money gets put back into more production, and as a result, more things get made. Getting a greenlight, particularly on original, unproven IP is tough. The studios, the creatives, all want to make more, not less, so if AI can catalyze more production, that's a win for everyone across the industry.

Dave Clark: And that leads to more creative voices that can emerge. People often say, “Oh, it's the same movie again. I don't want to see it.” If there’s more money and opportunity in the system to make more projects, then studios can take more risks on IP, more bets on new voices, and more scripts can get realized.

Discussion Topic: Are there any common misconceptions about AI filmmaking that you'd like to dispel?

Dave Clark: Yeah. Using AI at the professional level is very hard. It's not easy. And I think people assume, because they see a lot of press and social media, that it's as simple as hitting one button and you get a movie. Making a movie is hard with this technology; you have to really bring filmmaking expertise into the room because audiences aren’t going to settle for less. We're used to great movies, we're used to great visual effects. So if we're going to introduce a new tool, it has to be at least as great, if not greater, in my opinion, or at least be a help and additive to the process and not take anything away. It's actually a lot harder, especially with long form, because of character consistency issues. With generative AI, you often get generative drift. So maintaining consistency across a project is a real challenge.

And you also have to work within new production pipelines. Shout out to Adobe Firefly because with something like Boards, you're able to put 20 images on one board and animate them all, and get different versions. It's the same idea as working on set and getting different camera takes, but you're working through it with a machine. But at the end of the day, it's still the artist's vision. You're still directing the visuals. It's just a different way of directing visuals.

Disclosure: No Film School was hosted by Adobe at Adobe MAX. Adobe is also a No Film School sponsor, and their directly sponsored posts are marked as Sponsored.