Key Takeaways
Meta is preparing to roll out its new content labeling system that will automatically tag artificially generated video, audio and images as “Made with AI”.
The new system is designed to help users identify manipulated media. However, AI deepfakes that don't breach Meta's terms of use are still allowed on its platforms.
In an article outlining Meta’s new approach to AI-generated content, Vice President of Content Policy Monika Bickert explained that the company’s existing approach is too narrow for the wide range of challenges presented by modern AI:
“Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos. In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving.”
She said that the strategy of labeling content created with AI was designed to ensure transparency without curtailing users' freedom of expression.
However, posts will be removed if they violate Meta’s Community Standards. For example, Meta will remove all content that doesn’t comply with the firm’s rules concerning election interference, bullying and harassment, or violence and incitement.
The new AI label builds on Meta’s existing “Imagined with AI” tag added to images created with its in-house AI image generator.
To identify a broader range of AI-generated content, the firm has been working with other AI developers to create common technical standards for embedding information in file metadata and invisible watermarks.
These signals will be used to automatically label content that contains them as "Made with AI." Meta will also require people who upload AI-generated videos, images or recordings to disclose their use of the technology for appropriate labeling.
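Meta hasn't published the exact signal format it and its partners will use, but one common approach is embedding provenance information in file metadata. As a purely illustrative sketch, the snippet below scans a PNG file's `tEXt` metadata chunks for a hypothetical `ai_generated` key; the key name and the use of PNG text chunks are assumptions for the example, not Meta's actual scheme.

```python
import struct
import zlib

def png_chunks(data: bytes):
    """Iterate over (chunk_type, chunk_data) pairs in a PNG byte stream."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        cdata = data[pos + 8:pos + 8 + length]
        yield ctype, cdata
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC

def ai_provenance(data: bytes, key: bytes = b"ai_generated"):
    """Return the value of a hypothetical AI-provenance tEXt entry, if any."""
    for ctype, cdata in png_chunks(data):
        if ctype == b"tEXt":
            # tEXt chunks hold "keyword\x00value"
            k, _, v = cdata.partition(b"\x00")
            if k == key:
                return v.decode()
    return None

def _chunk(ctype: bytes, cdata: bytes) -> bytes:
    """Serialize one PNG chunk with its CRC."""
    return (struct.pack(">I", len(cdata)) + ctype + cdata
            + struct.pack(">I", zlib.crc32(ctype + cdata)))

# Build a minimal PNG carrying the hypothetical marker, then scan it.
png = (b"\x89PNG\r\n\x1a\n"
       + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + _chunk(b"tEXt", b"ai_generated\x00model-x")
       + _chunk(b"IEND", b""))
print(ai_provenance(png))  # prints "model-x"
```

A real-world scheme would more likely use a standard such as C2PA content credentials or IPTC metadata, which cryptographically sign provenance claims rather than storing a plain key-value pair.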
Alongside the standard "Made with AI" label, Meta said it will continue to flag posts that are misleading or false, while certain AI-generated content will be labeled with additional information:
“If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context.”
Although Meta and its partners’ efforts could make it harder to pass AI-generated content off as real, anyone who wants to circumvent the rules only needs to alter file metadata or use a different tool that doesn’t embed hidden markers of AI generation.
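Stripping this kind of marker is trivially easy. Continuing the illustrative PNG example above (the `ai_generated` key is hypothetical, not Meta's actual format), a few lines of code can rewrite a file with every textual metadata chunk removed while leaving the image data untouched:

```python
import struct
import zlib

def strip_text_chunks(data: bytes) -> bytes:
    """Rewrite a PNG with its textual metadata (tEXt/zTXt/iTXt) removed."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = [data[:8]], 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        chunk = data[pos:pos + 12 + length]  # length + type + data + CRC
        if ctype not in (b"tEXt", b"zTXt", b"iTXt"):
            out.append(chunk)  # keep everything except text metadata
        pos += 12 + length
    return b"".join(out)

def _chunk(ctype: bytes, cdata: bytes) -> bytes:
    """Serialize one PNG chunk with its CRC."""
    return (struct.pack(">I", len(cdata)) + ctype + cdata
            + struct.pack(">I", zlib.crc32(ctype + cdata)))

# A minimal PNG carrying a hypothetical AI-provenance marker.
png = (b"\x89PNG\r\n\x1a\n"
       + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + _chunk(b"tEXt", b"ai_generated\x00model-x")
       + _chunk(b"IEND", b""))
cleaned = strip_text_chunks(png)
print(b"ai_generated" in cleaned)  # prints False
```

This is why metadata-based labeling is only a weak signal on its own: cryptographically signed provenance (as in C2PA) can make tampering detectable, but nothing stops a determined uploader from simply discarding the metadata altogether.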
Even detection tools that analyze images and videos for patterns that suggest they were created with AI don’t have a 100% accuracy rate.
Meta’s critics have condemned the company for not doing enough to prevent abusive deepfakes from circulating on its platforms. But ultimately, it may never be able to completely stamp them out. As fast as social media firms implement new policies and tactics to police the technology, malicious actors will likely develop novel ways to bypass restrictions.