Facebook and Instagram to Label All AI-Generated Images

Meta has announced that it will introduce technology capable of detecting and labeling images generated by other companies’ artificial intelligence (AI) tools.

The system will be rolled out across its platforms, including Facebook, Instagram, and Threads.

Meta already labels AI-generated images produced by its own system. The company said it hopes the new technology, which is still in development, will create “momentum” for the industry to address AI-generated fakes.

However, an AI expert told the BBC that such tools are “easily circumvented.”

In a blog post, senior executive Sir Nick Clegg said Meta intends to expand its labeling of AI-generated fakes “in the coming months.”

In an interview with the Reuters news agency, he acknowledged that the technology is “not yet fully mature,” but said the company aims to “create momentum and incentives for other industries to follow suit.”

‘Easily Circumvented’

However, Prof Soheil Feizi, director of the Reliable AI Lab at the University of Maryland, suggested such a system could be easily bypassed.

“They may be able to train their detectors to flag some images particularly generated by some specific models,” he told the BBC.

“But those detectors can be easily bypassed with light processing over the images, and they may also have a high false positive rate.

“So, I think it is unlikely for various broad applications.”

Meta has acknowledged that its tool will not work for audio and video, even though these are the media at the center of much of the concern about AI manipulation.

Instead, the company said, it is asking users to label their own audio and video posts, and it “may enforce penalties if they fail to do so.”

Sir Nick also admitted that it is not possible to test text generated by tools such as ChatGPT.

“The ship has sailed on that,” he told Reuters.

‘Incoherent Media Policy’

On Monday, Meta’s Oversight Board criticized the company for its policy on manipulated media, calling it “incoherent, lacking convincing justification, and overly focused on how the content is made.”

The Oversight Board is funded by Meta but independent from the company.

The criticism came in response to a ruling on a video of US President Joe Biden, which had been edited to make it appear as though he was inappropriately touching his granddaughter.

Because the video was not manipulated using artificial intelligence, and depicted Mr. Biden behaving in a way he did not, rather than saying something he did not, it did not violate Meta’s manipulated media policy, and was not removed.

The Board agreed that the video didn’t violate Meta’s current rules on fake media, but said those rules should be updated.

Sir Nick told Reuters he broadly agreed with the ruling.

He acknowledged that Meta’s existing policies “just don’t fit in an environment where you will have synthetic content and hybrid content much more than before.”

Since January, the company has required political ads to disclose when they use digitally altered images or video.