Vimeo’s CEO recently issued a statement promising that the social video hosting platform would not train AI models on users’ content without their explicit consent. Now the company is doubling down on AI trust and transparency with an update to its Terms of Service and Community Guidelines.

The updates ask creators to label AI-generated content (along with synthetically generated or otherwise meaningfully manipulated content) so that viewers aren’t misled by footage that could be mistaken for real people, places, or events.

Let’s look at this update in more depth, and explore what it could mean for the future of AI-generated content across other social video platforms and channels.


Vimeo’s AI-Labeling Push

In an in-depth blog post on its website, Vimeo explains why AI labeling matters for its platform, citing a “rise of deepfakes and other AI-generated media” that has made it harder for online viewers to tell the difference between what’s real and what’s fake.

In an effort to curb the spread of misinformation and its harmful consequences, the new AI content labeling push is aimed at four goals:

  • Uphold platform integrity: Combat the spread of misinformation, especially deepfakes that can be used to manipulate public opinion.
  • Preserve content authenticity: Maintain trust and integrity within the Vimeo community by reducing the risk of viewers being misled by inauthentic video content.
  • Empower viewers: Provide viewers with the information they need to understand the content they consume.
  • Support creators: Help creators maintain their audience’s trust by being transparent about the techniques behind their work.
Image: AI Generated label for Vimeo videos (vimeo.com)

Using Vimeo’s AI Labels

Starting soon, Vimeo users will be able to voluntarily disclose their use of AI. The label will apply to content created with Vimeo’s own AI tools, and the company eventually plans to add an automated labeling system that can detect AI-generated content.

Vimeo recommends labeling content that:

  • Portrays a real person saying or doing something they didn’t say or do.
  • Alters footage of an actual event or location.
  • Creates a lifelike scene that didn’t take place.

This is set to include content that has been partially or entirely created or altered using AI-powered audio, video, or image tools.

Image: Options for adding an AI generated label (vimeo.com)

What’s Next?

For those who have been mistrustful of AI in general, this could be a good indicator that companies are pushing to provide more transparency around AI content, as well as a frank acknowledgment of the risks AI poses to trust and authenticity.

We’ll see how quickly Vimeo rolls out this AI labeling system, and which other platforms (such as YouTube and other social apps) follow suit in the coming weeks and months as AI continues to evolve.