OpenAI’s Insanely Powerful Text-to-Video Model ‘Sora’ is Here
A look at Sora, OpenAI’s new text-to-video model that can create highly detailed scenes from simple text instructions.
Whelp, that’s it, folks! In what has been the least surprising announcement ever, OpenAI has revealed their latest text-to-video model, and… well, it’s just about everything we’ve ever dreamed (or feared) about the coming AI video revolution.
OpenAI’s new model, announced today, is called ‘Sora,’ and it promises to create realistic and imaginative scenes from simple text instructions. And when we say scenes, we mean highly detailed scenes with complex camera motion and multiple characters with vibrant emotions.
Let’s check out what’s been shared so far, and delve a bit into how this might blow the doors of generative AI video wide open in the film and video industry.
OpenAI Sora is Here
In announcements shared on X today by the likes of OpenAI and Sam Altman, Sora is officially here. And man, does it look good. OpenAI has now shared dozens of examples of Sora at work, all of which look hyper-realistic and very intricate. And this is just the start.
Sora is capable of creating videos up to 60 seconds long, which is insane and far longer than the few-second clips its competitors currently produce. The sophistication of its scenes also blows every other generative AI video model out of the water.
Don’t believe us? Take a look for yourself.
Sora is Available Today
OpenAI has also announced that Sora is out and available now, although it is currently only usable by red teamers assessing critical areas for harms and risks. OpenAI is also granting access to a select group of visual artists, designers, and filmmakers who will give feedback on how to advance the model even further.
If you’ve used any of OpenAI’s models in the past, you should be familiar with how advanced their AI is at understanding language, and since Sora is a text-based generative AI, your text prompts will be a huge part of the generative process.
Sora is capable of generating complex scenes with multiple characters, specific types of motion, and accurate details of both the subjects and backgrounds. As OpenAI puts it, “the model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”
What Comes Next?
That’s the big question here. With Sora live now for select users, we’d assume it’s only a short time before it opens to everyone. To their credit, OpenAI does report that they’ll be taking “several important safety steps ahead of making Sora available” and are attempting to take certain precautions to keep this technology from being used to create misleading content.
But who honestly knows at this point?
There’s a lot more to be said and explored about OpenAI’s Sora in the future, but for now, there’s not much any of us can do except sit tight and watch as this technology rolls out and inevitably changes the film and video industry (and the world, for that matter) completely.