EU reaches agreement to regulate AI with the world’s first laws

In a historic move, the European Union (EU) has reached an agreement on the world’s first comprehensive laws to regulate artificial intelligence (AI), following a 37-hour negotiation between the European Parliament and member states.

The deal, described as “historic” by European Commissioner Thierry Breton, covers the use of AI in services such as social media and search engines, affecting major platforms like X, TikTok, and Google.

The suite of laws positions the EU ahead of the United States, China, and the UK in the global race to regulate AI and address potential risks associated with the rapidly advancing technology.

The negotiations involved approximately 100 individuals who worked for nearly three days to finalise the deal. Spain’s secretary of state for AI, Carme Artigas, facilitated the talks, reportedly securing the backing of France and Germany.

There was, however, friction over the regulatory approach, with reports suggesting that tech companies in those countries had sought a lighter touch to encourage innovation among smaller enterprises.

While specific details of the forthcoming law were not provided, it is expected to take effect no earlier than 2025. The agreement addressed key issues such as general-purpose foundation models and AI-driven surveillance, which had raised concerns about real-time monitoring and emotion recognition.

Notably, the European Parliament secured a ban on real-time surveillance and biometric technologies, including emotion recognition, with exceptions for unexpected terrorist threats, the search for victims, and the prosecution of serious crimes. MEP Brando Benifei emphasised the legislation’s goal of ensuring a human-centric approach, respecting fundamental rights, building trust, and navigating the AI revolution responsibly.

One significant outcome was the establishment of a risk-based, tiered system for AI regulation, where the level of regulation corresponds to the risk posed to health, safety, and human rights. For general-purpose models, the highest-risk category is determined by the amount of computing power, measured in floating-point operations (FLOPs), used to train the model. Sources indicated that only one model, GPT-4, currently falls into this highest-risk definition.
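For readers who want a more concrete sense of how such a compute-based tier works, here is a minimal, hypothetical sketch in Python. The 10^25 FLOPs cutoff used below is an assumption drawn from later reporting on the final text; the article itself does not specify a threshold, and the function name and tier labels are illustrative only.

```python
# Illustrative sketch only: classifying a model into a regulatory tier by the
# total floating-point operations (FLOPs) used to train it.
# ASSUMPTION: the 10**25 FLOPs cutoff is not stated in this article; it is the
# systemic-risk threshold reported for the final AI Act text.

HIGH_RISK_TRAINING_FLOPS = 10**25  # assumed highest-tier threshold


def risk_tier(training_flops: float) -> str:
    """Return a coarse regulatory tier based on total training compute."""
    if training_flops >= HIGH_RISK_TRAINING_FLOPS:
        return "highest-risk tier (systemic risk)"
    return "lower tier"


# Example: a model trained with roughly 2e25 FLOPs would land in the top tier.
print(risk_tier(2e25))
```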

The agreement also imposes major obligations on AI services, including disclosure of the data used to train their models. Dragoș Tudorache, the Romanian MEP who led the European Parliament’s four-year battle to regulate AI, highlighted the importance of setting real regulations for AI and guiding its development in a human-centric direction.

Why is this important?

The EU’s commitment to comprehensive AI regulation may serve as a global example for other governments considering similar measures. Anu Bradford, a Columbia Law School professor and EU digital regulation expert, suggested that while other countries might not copy every provision, they are likely to emulate many aspects of the EU’s regulatory framework, potentially influencing the behaviour of AI companies globally.
