While many will continue to debate the merits (and market value) of AI technology in many other forums and threads, the big players in AI continue to make some pretty significant advancements in how their AI models can create video content.

The latest news is a follow-up to our report that Runway is releasing its Gen-3 Alpha model, which promises to rival OpenAI’s Sora for what could be the leading AI video model on the market. (That is, once OpenAI actually releases Sora to the general public.)

Let’s take a look at some examples of what Runway’s Gen-3 Alpha can do, plus check out how Runway’s image-to-video technology can work with either first- or last-frame prompts—a unique twist for generative AI video that can (at the very least) yield some fun and interesting results.


Runway Gen-3 Alpha Image to Video

Officially announced and released last week, Runway’s Gen-3 Alpha does indeed promise to be one of the best AI video models in the world. And with OpenAI’s slow-rolling of Sora’s release, Gen-3 Alpha can brag that it’s here and ready for users now—which is of course a plus.

Gen-3 Alpha works with both text prompts and image prompts, examples of which you can see in the thread shared by Runway below.

First or Last Frame Video Generation

While perhaps not at M. Night Shyamalan levels, the unique twist here for Runway’s Gen-3 Alpha is that its image-to-video technology allows creators to use an image as either the first or last frame of their video generation.

Runway reports that this feature can be used either on its own or combined with a text prompt for further guidance. It’s a neat wrinkle that the company has just shared, and one you can also check out in its own thread below.

If you’re curious to check it out, you can learn more about Runway Gen-3 Alpha here.