Facebook and Instagram to Label All Fake AI Images

Meta has announced that it will introduce technology capable of detecting and labeling images generated by artificial intelligence (AI) tools from other companies.

This technology will be implemented across its platforms, including Facebook, Instagram, and Threads.

Meta already labels AI-generated images produced by its own systems. The company hopes that the new technology, still in development, will generate “momentum” for the industry to address AI forgery.

However, an AI expert informed the BBC that such tools are “easily evadable.”

In a blog post by senior executive Sir Nick Clegg, Meta revealed its plans to expand the labeling of AI fakes “in the coming months.”

In an interview with Reuters, Sir Nick Clegg acknowledged that the technology is “not yet fully mature,” but the company aims to “create a sense of momentum and incentive for the rest of the industry to follow.”

‘Easy to Evade’

However, Professor Soheil Feizi, director of the Reliable AI Lab at the University of Maryland, suggested that such a system could be easily circumvented.

“They may be able to train their detector to flag some images specifically generated by some specific models,” he told the BBC.

“But those detectors can be easily evaded by some lightweight processing on top of the images, and they also can have a high rate of false positives.

“So I don’t think that it’s possible for a broad range of applications.”

Meta has acknowledged that its tool will not work for audio and video, even though these are the media types that raise the most concern about AI fakes.

The company stated that it is asking users to label their own audio and video posts, and it “may apply penalties if they fail to do so.”

Sir Nick Clegg also admitted that it would be impossible to detect text generated by tools such as ChatGPT.

“That ship has sailed,” he told Reuters.

‘Inconsistent’ Media Policy

On Monday, Meta’s Oversight Board criticized the company for its policy on manipulated media, calling it “incoherent, lacking in persuasive justification, and inappropriately focused on how content has been created.”

The Oversight Board, funded by Meta but independent of the company, issued the criticism in response to a ruling on a video of US President Joe Biden. The video edited existing footage of the president with his granddaughter to make it appear as though he was touching her inappropriately.

Because the video was not manipulated using artificial intelligence, and because it depicted Mr. Biden doing something he did not do rather than saying something he did not say, it did not violate Meta’s manipulated media policy and was not removed.

While the Board agreed that the video did not break Meta’s current rules on fake media, it suggested that the rules should be updated.

Sir Nick told Reuters that he broadly agreed with the ruling and admitted that Meta’s existing policy “is just simply not fit for purpose in an environment where you’re going to have way more synthetic content and hybrid content than before.”

Since January, the company has had a policy in place requiring political ads to disclose when they use digitally altered images or video.