Making its debut on May 2nd, the latest single from music artist Washed Out has more going for it than its silky chillwave beats. The music video for “Hardest Part” was created by filmmaker and multi-disciplinary artist Paul Trillo, and it is the world’s first officially commissioned music video to make use of Sora, OpenAI’s not-yet-publicly-released generative AI video tool.

We sat down with Trillo, who might be a familiar name to those following the AI video revolution as one of the few creators hand-selected by OpenAI to test Sora for a “First Impressions” project, to chat about his new music video, his prompt-based process, and his thoughts on the future of AI as a creative booster.


Editor's note: The following conversation has been edited for length and clarity.

No Film School: Thanks for chatting with us, Paul. Before we dive into your process for this new music video from Washed Out and this AI revolution that we’re all a part of, could you share a bit about your background and history as a filmmaker and artist?

Paul Trillo: Well, going in it was never intended to be a revolution, but it has become one, and I guess I've kind of ended up on the tip of the spear for a lot of things, for better or worse. I think part of that comes from the fact that a lot of my work has always experimented with technique, whether it's camera technologies or post-production techniques. It's kind of been genre agnostic, format agnostic.

I've done everything from Super Bowl comedy spots to dance films, to music videos, to video art installations, like ‘Notes to my Future Self,’ a museum piece that's in Madrid right now. I've done full narrative stuff. I've done fully abstract stuff. So it's always been about me trying to get out of my comfort zone, trying things that I haven't tried before, and staying constantly curious that way.

So yeah, it's led to a lot of different projects, some of them featured in No Film School. I did a 10-minute single-take drone film. I did a piece using the first mobile bullet time rig built from smartphones. And it's always just been about how a technique or a technology opens up a story or a visual concept that we haven't seen before. So I've always leaned into the technology aspect to discover new kinds of visuals and to push back against this idea that everything's been done before.


NFS: For the readers here who might have first seen your name as one of the creatives who got to preview OpenAI’s Sora and share their first impressions, could you go a bit more in-depth on what you thought of Sora when you first got the chance to experiment with it?

Paul Trillo: Yeah, it was a little overwhelming at first, and I remember thinking, “Where do I even begin with this thing?” My immediate instinct was to try to break it, which is what I try to do with most of these post effects and camera things. I think OpenAI was very curious to learn from our process and didn't tell us too much about how to use the tool. Initially I found that it had almost a video game aesthetic, this 1990s kind of 3D animation slash stock video look.

I felt like I couldn’t have aesthetic ownership of any of this, which is always the challenge with these AI things: trying to retain your voice or your fingerprints in the process when you're intrinsically limited to whatever the hell they trained this thing on.

I wanted to see if I could get it out of that video game look and make it into something more tangible. I also wanted to make it as dynamic and kinetic as possible. And, frankly, I had been getting a bit bored with a lot of the AI work that's been coming out, where it's just essentially fancy slideshows. They're like PowerPoint presentations disguised as short films, where the camera and the people barely move.

And even when there’s movement, it doesn't really last beyond a few frames before it starts to fall apart. So my instinct was, all right, can I make this really chaotic? How fast can I get this camera moving, and what terms can I even use to get through to it? It was a total guessing game. One of the first tests I did was just a 15-second clip with no edits at all, just to see if it could do a raw output with these whip pans, as if it was continuously zooming through different eras of time.

And when I did that, I was like, oh my God, this thing is way more powerful than they let on, specifically from the more experimental film angle, which is what I'm more interested in. Once I cracked that and saw some of the weird film effects that were going on, I was like, okay, I can do a lot more with this as a tool. And from there it was a constant investigation of what else it could do.


NFS: Moving on to this new Washed Out music video, which very well may be the world’s first (at least officially commissioned) music video to use Sora, could you share a bit about how this project came to you?

Paul Trillo: Yeah, the timing aligned pretty nicely. It was pretty serendipitous how the delivery of this video lined up with OpenAI giving me the opportunity to use the tool for a music video. Originally Ernest from Washed Out reached out to me at the end of January, and we were discussing different ideas. I'm constantly doing too many things all at once, so I was getting a little nervous about when I was actually going to have time to shoot this thing. But when I got a note from my contacts at OpenAI with the go-ahead, it all fell into place.

So I saw this as an opportunity to really, I don't know, do something crazy that I could not have done within the timeline and the budget. And it actually played well with how I was finding that Sora was able to blend environments in these surreal ways. So I chose to lean into this idea of conjuring these AI images almost like you're conjuring false memories.

NFS: Could you talk about using Sora and what kind of render times you were dealing with? Also, how much prompting did it take to get what we ultimately see in the music video?

Paul Trillo: I think for this project I generated almost 700 clips to make this video, and I used about 55 or 56 of them. So I calculated that about 10% of the stuff I generated actually made it into the final video.

As far as time goes, generations with Sora can take anywhere from 15 minutes to an hour, depending on how much you're using it, how long your clips are, and how high the resolution is. So there's a lot of variation in render time. But you can imagine that with 700 clips it's going to take more than a day of rendering. It's multiple days. I worked on it for, I guess, about six weeks or something.

I could have made another video in six weeks, but I put a lot of that time into the editing process. I spent extra days editing this thing because you have this fluid back and forth between the writing and ideation of the piece and the actual final creation of the piece.

The more ideas you can pry out, the more you just end up filling your time. The more time you save in one place, the more you end up spending it somewhere else. We all have technologies that save us time, all these things that our smartphones do that save us time, and yet we still find ourselves really busy. So it's not like technology is giving us back hours in our day that are just totally free. We end up filling our free time. And I think the same goes for the creative process as well.


NFS: Interesting. Since the majority of us haven’t been able to try Sora ourselves yet, I’m curious what the actual process was like, you know, when working with the prompts for the first time and trying to bring a project like this to life.

Paul Trillo: For me it originally started with these rolling hills that I felt were going to be a hard thing to find in reality. I wondered how Sora would handle rolling green, surreal landscapes, and it did it better than, I think, any location we could have found. And while we were waiting to hear back from OpenAI to get everything approved, I was experimenting with some early tests, and I eventually discovered that it can do these kinds of high-speed infinite zoom, infinite dolly move things, which is something I've done in my work prior to AI.

So it's kind of just a go-to thing I ended up doing. I wanted to see if some of these techniques I had done in other places with other tools would work here, and I thought that could be a cool device to tell a story with.

I actually had this idea of an infinite zoom through time following a young couple across four decades. I had the idea 10 years ago, but I was just never able to practically figure it out on a music video budget, so I shelved it. But I was like, “Oh, that could be interesting here.” The song is about letting someone go, moving on, and knowing that you're going to have to live your life without someone. And so I wanted the story to honor the lyrics and the reason the song was made.

From there, using Sora was honestly liberating, as I was able to throw any idea at it, even if it was a bad idea, just to explore. Because sometimes you self-edit, you compromise, or you filter certain ideas in the creative process. But with Sora, there's no judgment there. You're just testing to see whether an idea works without even having to pitch it to anyone.

NFS: It does seem like experimentation is one way in which AI could most immediately be helpful for filmmakers.

Paul Trillo: Yeah, the experimentation and that ability to try things out is very unique to the tool, and I think it's served best when it's being used in its most experimental form.


NFS: Exploring that a bit further, if you had to give advice to any filmmakers or artists looking to use Sora, or AI in general, in their projects, what advice would that be?

Paul Trillo: Any advice I could give on the technical side of using the tool is still in flux. But I think it's really great for trying an idea that no one else is going to let you try, with no need to get a green light or a budget for something. You can see whether something's working or not without having to get approval from someone else. And that can lead you to try things that you wouldn't normally have explored.

But I also think there’s something to knowing where to draw the line. You don't want to become a hundred percent reliant on AI, or use it as a crutch, because you will find that it doesn't do everything and there are limitations to it. It's still very strange. If your idea is somewhat experimental, or if it conceptually makes sense to use AI for a higher meaning, I'd say it's justified.

But if you're struggling with character consistency, or if you're like, “Oh, it doesn't do dialogue or whatever,” then just go out and shoot it with a camera. If you're hitting the wall of AI's limitations, that means you're maybe not using it in the right way. It's great for letting it hallucinate and finding the happy accidents that come out of it, the glitches, which I think can be really beautiful and interesting, as those are things you couldn't shoot with a real camera.

So for those things, finding those kinds of beautiful errors, that's a great use of this, but I am not excited to see this replace the entire filmmaking process. I think that's honestly a bit boring and not really using the tool to its advantage.