OpenAI, Google, and Microsoft Agree to Detect AI Election ‘Disruptors’

Several technology giants are warning the world about AI’s potential to ‘disrupt’ elections, even as they sign an agreement to detect AI-generated content.

In 2024, elections are being held in numerous countries, including Indonesia, South Africa, India, Taiwan, the United States (US), and several European nations.

These elections are said to be overshadowed by AI, which could well be exploited by bad actors to ‘disrupt’ the democratic process.

In response, dozens of technology companies that build or use AI announced an agreement on Friday (16/2) to detect and combat harmful AI content in elections, including deepfakes of political candidates.

The signatories include OpenAI, Google, Meta, Microsoft, TikTok, and Adobe.

The agreement, called the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” includes a commitment to collaborate on technologies for detecting misleading AI-generated content.

It also commits the signatories to being transparent with the public about their efforts to address potentially harmful AI content.

“AI does not create election fraud, but we must ensure it does not help such fraud proliferate,” said Microsoft President Brad Smith in a statement at the Munich Security Conference on Friday (16/2), as quoted by CNN.

Technology companies generally have a poor track record in self-regulation and enforcing their own policies. Moreover, this new agreement emerges as regulators continue to lag behind in creating boundaries for rapidly advancing AI technologies.

Several new and evolving AI tools offer the ability to quickly and easily generate engaging text and realistic images.

Increasingly, these tools can produce video and audio as well, which experts say can be used to spread false information and mislead voters.

The announcement of this agreement comes a day after OpenAI unveiled Sora, a new text-to-video generation tool capable of producing highly realistic clips, on Thursday (15/2).

“My worst fear is that we, our field, our technology, our industry, cause significant harm to the world,” said OpenAI CEO Sam Altman at a US congressional hearing in May last year, urging lawmakers to regulate AI.

Separately, some companies have already partnered to develop industry standards for adding metadata to AI-generated images, so that other companies’ systems can automatically detect that an image was generated by AI.

This new agreement among technology companies advances these cross-industry efforts.

Specifically, the signatories pledge to collaborate on efforts such as finding ways to attach machine-readable signals to content that indicate its origin; a simple sketch of the idea follows below.
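The precise form of those machine-readable signals is still being worked out, but the basic concept can be illustrated in a few lines of Python. The snippet below is a minimal sketch, not any signatory’s actual implementation: it embeds a provenance record in a PNG text chunk using Pillow and reads it back. The `ai_provenance` field name and the use of plain, unsigned metadata are assumptions for demonstration only; real industry standards such as C2PA’s Content Credentials rely on cryptographically signed manifests that are far harder to strip or forge.

```python
# Illustrative only: a minimal provenance tag embedded as a PNG text chunk.
# Real standards (e.g. C2PA "Content Credentials") use signed manifests,
# not a plain text field; the "ai_provenance" key here is hypothetical.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(img: Image.Image, path: str, generator: str) -> None:
    """Save an image with a machine-readable marker naming its AI generator."""
    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps({"generated_by": generator}))
    img.save(path, pnginfo=meta)


def read_provenance(path: str) -> dict | None:
    """Return the provenance record if the image carries one, else None."""
    with Image.open(path) as img:
        raw = getattr(img, "text", {}).get("ai_provenance")
    return json.loads(raw) if raw else None


if __name__ == "__main__":
    # "example-model" stands in for whatever generator produced the image.
    tag_as_ai_generated(Image.new("RGB", (64, 64), "white"),
                        "generated.png", generator="example-model")
    print(read_provenance("generated.png"))  # {'generated_by': 'example-model'}
```

A platform receiving an upload could check for such a record and label the image accordingly; the weakness of unsigned metadata, which is trivially removed or rewritten, is exactly why the cross-industry work described above focuses on tamper-resistant signals.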

They also agree to evaluate their AI models to understand the risk of their producing deceptive, election-related AI content.

These companies also state they will work together on educational campaigns to teach the public how to protect themselves from being manipulated or deceived by this content.

Contents of the Agreement

According to Microsoft, the technology companies commit to the following under the new agreement:

  • Developing and implementing technologies to mitigate risks related to deceptive AI content in elections, including open-source tools if necessary.
  • Assessing models included in the scope of this agreement to understand potential risks related to deceptive AI content in elections.
  • Striving to detect the distribution of this content on their platforms.
  • Seeking to appropriately address such content when it is detected on their platforms.
  • Building cross-industry resilience against deceptive AI content in elections.
  • Providing transparency to the public about how the companies address such content.
  • Continuing to engage with a diverse range of civil society organizations and academics worldwide.
  • Supporting efforts to raise public awareness, media literacy, and resilience across society.

Companies that have signed this commitment include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.