The European Parliament approves the world’s first comprehensive artificial intelligence law

The European Parliament has approved the world’s first comprehensive framework to mitigate the risks of artificial intelligence (AI).

The sector has experienced explosive growth, driving massive profits but also fueling fears about bias, privacy, and even the future of humanity.

The AI Act works by categorizing products according to risk and adjusting scrutiny accordingly.

The creators of the law said it would make technology more “human-centered”.

“The AI law is not the end of the journey but the starting point for new governance built around technology,” added MEP Dragos Tudorache.

It also puts the EU at the forefront of global efforts to address the dangers associated with AI.

China has already introduced a set of AI laws. In October 2023, US President Joe Biden signed an executive order requiring AI developers to share safety test results with the US government.

But the EU has now gone further.

“The adoption of the AI Act marks the beginning of a new era of AI, and its importance cannot be overstated,” said Enza Iannopollo, a principal analyst at Forrester.

“The EU’s AI Act is the first and only set of binding requirements to mitigate AI risks in the world,” she added.

She said it would make the EU the global “de facto” standard for trustworthy AI, leaving all other regions, including the UK, to “catch up”.

In November 2023, the UK hosted an AI Safety Summit but is not planning legislation similar to the AI Act.

How the AI Act will work

The main idea of the law is to regulate AI based on its ability to cause harm to society. The higher the risk, the stricter the rules.

AI applications posing a “clear risk to fundamental rights” will be prohibited, for example, some involving the processing of biometric data.

AI systems deemed “high-risk”, such as those used in critical infrastructure, education, healthcare, law enforcement, border management, or elections, will have to comply with strict requirements.

Low-risk services, such as spam filters, will face lighter regulation: the EU expects most services to fall into this category.

The Act also creates provisions to address the risks posed by the systems underpinning generative AI tools and chatbots such as OpenAI’s ChatGPT.

These require producers of some general-purpose AI systems, which can be put to a wide variety of tasks, to be transparent about the material used to train their models and to comply with EU copyright law.

Mr. Tudorache told reporters before the vote that copyright provisions had been one of the “most pressed” parts of the bill.

OpenAI, Stability AI, and graphics chip giant Nvidia are among a handful of AI companies facing lawsuits for their use of data to train generative models.

Some artists, writers, and musicians have argued that the process of “scraping” large volumes of data, potentially including their own works, from virtually every corner of the internet violates copyright laws.

The Act still has several more steps to go through before formally becoming law.

Legal linguists, whose job is to review and translate laws, will scrutinize its text, and the Council of the European Union, composed of representatives from EU member states, will also need to endorse it, though this is expected to be a formality.

Meanwhile, companies will be working to comply with the legislation.

Kirsten Rulf, a former adviser to the German government and now a partner at Boston Consulting Group, says more than 300 companies have contacted her firm so far.

“They want to know how to scale the technology and get value from AI,” she told the BBC.

“Companies need and want legal certainty.”