The “ChatGPT for Video Editing” Tool Eddie AI Releases Automatic Multi-Cam Editing Feature
Let’s look at how AI can help you with your multi-cam rough cuts.
As we covered when it was first announced, Eddie AI is an AI-powered video editing assistant that positions itself as a de facto “ChatGPT for video editing” tool. Building on its core chat-based editing features, Eddie AI has added a multi-camera editing feature that promises to help video editors quickly rough cut their multi-cam projects into more manageable (and editable) portions.
The new AI feature helps you choose the optimal camera angle for whichever on-camera subject is speaking at any given moment. It was demonstrated at Apple’s Final Cut Pro Summit in Cupertino, California, where Eddie AI also showcased its latest prompt-to-edit capabilities.
Let’s look at this new functionality and explore how it could be an option for your multi-cam video projects.
Eddie AI Multi-Cam Editing
For filmmakers, multicam footage has always been a post-production time sink: a dance between syncing feeds and timing cuts. With this new Eddie AI release, that could change for small teams handling multiple camera setups.
Picture your typical interview setup: three cameras capturing a CEO being interviewed. One wide shot and two close-ups, one on the interviewer and one on the interviewee.
Traditionally, turning this into a dynamic edit means hours in the editing bay. With this new feature, it can instead be as simple as exchanging a few prompts with Eddie and receiving a rough cut back within minutes. That edit includes cuts between cameras chosen automatically based on several variables, including who is speaking and when.
How Does Eddie’s AI Multi-Cam Editing Work?
Eddie’s AI starts working as soon as you import your videos; imports can currently include separate audio and up to three camera angles. Underpinning this release, the team behind Eddie developed what it describes as one of the world’s most advanced speaker identification systems. It analyzes both audio and video streams, including motion detection, to identify who is speaking and when, and then selects the appropriate camera angle.
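Eddie hasn’t published how its angle selection actually works, but the underlying idea is familiar: speaker diarization tells you who is talking when, and a mapping from speakers to camera angles turns that timeline into a cut list. The sketch below is a minimal, purely illustrative version of that general technique, with all names (the segments, the camera labels, the `min_shot_len` threshold) being assumptions rather than anything from Eddie.

```python
from dataclasses import dataclass

# Hypothetical illustration only: Eddie AI has not published its internals.
# This sketch shows the general idea of turning speaker-diarization output
# into a multicam cut list.

@dataclass
class SpeechSegment:
    speaker: str      # e.g. "interviewer", "ceo"
    start: float      # seconds
    end: float

@dataclass
class Cut:
    camera: str       # e.g. "cam_wide", "cam_interviewer", "cam_ceo"
    start: float
    end: float

def build_cut_list(segments, speaker_to_camera, wide_camera="cam_wide",
                   min_shot_len=2.0):
    """Map each speech segment to its speaker's close-up; send very short
    segments to the wide shot so the edit doesn't feel jumpy."""
    cuts = []
    for seg in sorted(segments, key=lambda s: s.start):
        camera = speaker_to_camera.get(seg.speaker, wide_camera)
        # Brief interjections or cross-talk stay on the wide shot.
        if seg.end - seg.start < min_shot_len:
            camera = wide_camera
        if cuts and cuts[-1].camera == camera:
            cuts[-1].end = seg.end          # extend the current shot
        else:
            cuts.append(Cut(camera, seg.start, seg.end))
    return cuts

if __name__ == "__main__":
    segments = [
        SpeechSegment("interviewer", 0.0, 4.5),
        SpeechSegment("ceo", 4.5, 20.0),
        SpeechSegment("interviewer", 20.0, 21.0),   # brief interjection
        SpeechSegment("ceo", 21.0, 35.0),
    ]
    mapping = {"interviewer": "cam_interviewer", "ceo": "cam_ceo"}
    for cut in build_cut_list(segments, mapping):
        print(f"{cut.camera}: {cut.start:.1f}s - {cut.end:.1f}s")
```

In the interview example above, this logic keeps the close-up on whoever is speaking and falls back to the wide shot for quick interjections, which is the same kind of decision Eddie is making automatically from its audio and motion analysis.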
A big part of Eddie’s appeal is that you can focus on the story by giving Eddie text prompts, similar to ChatGPT but for video editing. Users can ask Eddie to identify topics, find the most important soundbites, or create a 5-minute rough cut, and it does the heavy lifting of analyzing the footage and creating the cuts. You can then review the edits inline within Eddie’s interface. With the AI multi-camera editing launch, those edits are also cut to the optimal shot at the right moment.
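Eddie doesn’t document how it decides which soundbites survive a “create a 5-minute rough cut” request, but conceptually it is a selection problem: from transcribed, rated soundbites, pick the strongest set that fits the target runtime while keeping story order. The sketch below is purely illustrative; the relevance scores and field names are assumptions, not Eddie’s actual logic.

```python
# Purely illustrative: greedily select transcribed soundbites to hit a
# target runtime, then restore chronological order. Not Eddie AI's logic.

def rough_cut(soundbites, target_seconds=300):
    """soundbites: list of dicts with 'text', 'duration' (s), 'relevance' (0-1)."""
    # Favor the most relevant bites first...
    ranked = sorted(soundbites, key=lambda b: b["relevance"], reverse=True)
    chosen, total = [], 0.0
    for bite in ranked:
        if total + bite["duration"] <= target_seconds:
            chosen.append(bite)
            total += bite["duration"]
    # ...then put them back in original order so the story still flows.
    chosen.sort(key=lambda b: soundbites.index(b))
    return chosen, total

if __name__ == "__main__":
    bites = [
        {"text": "Our founding story...", "duration": 40, "relevance": 0.9},
        {"text": "Quarterly numbers...", "duration": 95, "relevance": 0.4},
        {"text": "Why this product matters...", "duration": 120, "relevance": 0.95},
        {"text": "Team culture...", "duration": 80, "relevance": 0.7},
    ]
    cut, runtime = rough_cut(bites, target_seconds=300)
    print(f"{len(cut)} soundbites, {runtime:.0f}s total")
```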
Take Your Rough Cuts into Your NLE
Once satisfied with Eddie's cuts, users can export directly to their NLE, such as Adobe Premiere Pro, DaVinci Resolve, or Final Cut Pro. There, the edit and its chosen angles relink to the source footage, ready for polishing, color, and music.
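The announcement doesn’t detail Eddie’s export formats, but the relinking idea is the same one editors have relied on for decades: a cut list carrying source and record timecodes that the NLE matches back to the original media. As a rough illustration only (the reel names, frame rate, and helper functions here are assumptions), this sketch writes a cut list out as a CMX3600-style EDL, a format NLEs like Premiere Pro and Resolve can import directly.

```python
# Hypothetical sketch: convert a simple cut list into a CMX3600-style EDL
# so an NLE can relink it to source footage. Assumes a fixed frame rate
# and non-drop-frame timecode for brevity.

FPS = 25  # assumed frame rate for this illustration

def to_timecode(seconds, fps=FPS):
    """Convert seconds to HH:MM:SS:FF timecode (non-drop frame)."""
    total_frames = round(seconds * fps)
    frames = total_frames % fps
    total_seconds = total_frames // fps
    return (f"{total_seconds // 3600:02d}:{(total_seconds % 3600) // 60:02d}:"
            f"{total_seconds % 60:02d}:{frames:02d}")

def write_edl(cuts, title="EDDIE ROUGH CUT"):
    """cuts: list of (reel_name, source_start_s, source_end_s) tuples,
    assembled back to back on the record side."""
    lines = [f"TITLE: {title}", "FCM: NON-DROP FRAME", ""]
    record_pos = 0.0
    for i, (reel, src_in, src_out) in enumerate(cuts, start=1):
        duration = src_out - src_in
        lines.append(
            f"{i:03d}  {reel:<8} V     C        "
            f"{to_timecode(src_in)} {to_timecode(src_out)} "
            f"{to_timecode(record_pos)} {to_timecode(record_pos + duration)}"
        )
        record_pos += duration
    return "\n".join(lines)

if __name__ == "__main__":
    rough_cut = [
        ("CAM_INT", 0.0, 4.5),     # interviewer close-up
        ("CAM_CEO", 4.5, 20.0),    # interviewee close-up
        ("CAM_WIDE", 20.0, 21.0),  # wide shot for a brief interjection
        ("CAM_CEO", 21.0, 35.0),
    ]
    print(write_edl(rough_cut))
```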
For small teams and content creators looking to raise their production values with multiple cameras while working more efficiently, this new feature could be a game-changing time saver.