Meta to Introduce AI-Generated Content Labeling in May

The tech behemoth Meta, which owns Facebook and Instagram, announced on Friday that it will begin labeling AI-generated media in May. This move is part of Meta’s efforts to allay concerns among users and governments regarding the risks associated with deepfakes.

Meta clarified that it will no longer remove manipulated images and audio that do not explicitly violate its rules. Instead, it will focus on labeling and providing contextual information, an approach intended to avoid encroaching on freedom of speech.

These changes are a direct response to critiques from Meta’s oversight board, an independent body that assesses the company’s content moderation decisions. In February, the board urged Meta to urgently revamp its approach to manipulated media due to significant advancements in AI technology, making it easier to create highly convincing deepfakes.

The board’s warning coincided with concerns about the widespread misuse of AI-powered applications for disinformation purposes, especially during pivotal election years globally, including in the United States.

Meta’s forthcoming “Made with AI” labels will help identify content created or altered using AI technology, spanning video, audio, and images. Moreover, a more prominent label will be applied to content deemed highly likely to mislead the public.

In a blog post, Monika Bickert, Meta’s Vice President of Content Policy, emphasized the importance of transparency and additional context in handling such content.

These labeling measures align with an agreement among major tech giants and AI developers in February to combat manipulated content intended to deceive voters. Meta, along with Google and OpenAI, had previously agreed on a common watermarking standard to tag AI-generated images.

Meta outlined a phased rollout, with AI-generated content labeling set to commence in May 2024. Concurrently, the removal of manipulated media solely based on the previous policy will cease in July. However, content manipulated using AI will still be subject to removal if it breaches other Community Standards, such as those prohibiting hate speech or voter interference.

Recent instances of convincing AI deepfakes have heightened concerns about the accessibility of this technology. The oversight board’s recommendations stemmed from its review of Meta’s decision to keep a manipulated video of US President Joe Biden online, highlighting the potential for misinformation.

In separate incidents, a robocall impersonating Biden and AI-generated speeches in Pakistan underscored the multifaceted challenges posed by AI manipulation in digital spaces.