EU Reaches Agreement on AI Act Amid Three-Day Negotiations

The EU reached a provisional deal on the AI Act on December 8, 2023, following record-breaking 36-hour ‘trilogue’ negotiations between the EU Council, the EU Commission and the European Parliament.

The landmark bill will regulate the use of AI systems, from generative AI models like ChatGPT to AI systems used by governments and law enforcement, including for biometric surveillance.

The Tiered Approach Maintained

The final draft maintained the tiered approach to controlling foundation models, with horizontal obligations for all models and special treatment for ‘high-risk’ and ‘unacceptable-risk’ AI practices.

All providers of foundation models will need to make their models transparent and publish a detailed summary of the training data “without prejudice to trade secrets.”

AI-generated content will have to be immediately recognizable as such.

The ‘high-risk’ AI practices will be strictly regulated, with obligations such as model evaluation, assessing and tracking systemic risks, cybersecurity protections and reporting on the model’s energy consumption.

The provisional agreement also provides for a fundamental rights impact assessment before deployers put a high-risk AI system on the market.

AI practices posing an ‘unacceptable risk’ will be banned. These include manipulative techniques, systems exploiting vulnerabilities, social scoring and the indiscriminate scraping of facial images.

An automatic categorization as ‘systemic’ was also added for models trained with computing power above 10^25 floating-point operations (FLOPs).
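
For a sense of scale, the threshold measures total training compute. The sketch below checks a model against it using the widely cited rule of thumb that training compute is roughly six times the parameter count times the number of training tokens; the heuristic and the example figures are assumptions for illustration, not anything specified in the Act.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOPs threshold.
# Uses the common approximation: training compute ~= 6 * parameters * tokens.
# This heuristic and the example figures are illustrative assumptions,
# not anything specified in the Act itself.

SYSTEMIC_THRESHOLD_FLOPS = 1e25  # compute threshold named in the provisional deal


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute with the 6*N*D rule of thumb."""
    return 6 * n_params * n_tokens


def is_presumed_systemic(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_THRESHOLD_FLOPS


# Hypothetical figures: a 70B-parameter model trained on 2T tokens
print(is_presumed_systemic(70e9, 2e12))     # ~8.4e23 FLOPs -> False
# A hypothetical 1.8T-parameter model trained on 13T tokens
print(is_presumed_systemic(1.8e12, 13e12))  # ~1.4e26 FLOPs -> True
```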

Gaurav Kapoor, co-CEO and co-founder of compliance solutions provider MetricStream, praised the agreement.

“I believe this legislation is a crucial step forward in ensuring the responsible and beneficial development and use of AI. The EU AI Act sets out a comprehensive framework that attempts to strike the right balance between promoting innovation and protecting fundamental rights and preventing malfeasance,” he told Infosecurity.

Which AI Uses Are Exempt?

A number of AI models and practices will be exempt from the regulation.

First, free and open-source models will not have to comply with the control measures outlined in the law.

Second, the EU Council introduced several exemptions for law enforcement operations, including the exclusion of sensitive operational data from transparency requirements and an allowance for the use of AI in exceptional circumstances related to public security.

The EU will require a database of general-purpose and high-risk AI systems to explain where, when and how they are being deployed in the EU, even when the deployer is a public agency.

EU countries, led by France, Germany and Italy, insisted on having a broad exemption for any AI system used for military or defense purposes, even when the system is provided by a private contractor.

Speaking to Infosecurity, Laura De Boel, a partner in the Brussels office of Wilson Sonsini Goodrich & Rosati, commented: “These governments have been pushed by AI companies in their countries to go against a too strict regulation on foundation models and generative AI.”

In the final draft, systems used exclusively for military or defense purposes will not have to comply with the Act.

Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation or to people using AI for non-professional reasons.

A New Governance Architecture

A new AI Office will be set up within the Commission. Its task will be to oversee the most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states.

A scientific panel of independent experts will also advise the AI Office about general-purpose AI (GPAI) models.

An AI Board comprising member states’ representatives will serve as a coordination platform and an advisory body to the EU Commission. It will give EU member states an essential role in implementing the regulation, including the design of codes of practice for foundation models.

Finally, an advisory forum for stakeholders, such as industry representatives, small and medium enterprises (SMEs), start-ups, civil society and academia, will be set up to provide technical expertise to the AI Board.

Record-High Penalties

The AI Act’s provisional penalties for non-compliance are the highest fines ever approved by the EU.

Here again, the fines follow a tiered approach (a short worked example follows the list):

  • Violations involving banned AI applications will result in fines of €35m or 7% of the offending company’s global annual turnover in the previous financial year, whichever is higher.
  • Violations of the AI Act’s obligations will result in fines of €15m or 3% of global annual turnover, whichever is higher.
  • The supply of incorrect information will result in fines of €7.5m or 1.5% of global annual turnover, whichever is higher.
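
To make the “whichever is higher” logic concrete, here is a minimal sketch of how each tiered cap combines a fixed sum with a turnover percentage. The tier labels and function name are illustrative assumptions rather than terms from the Act’s text.

```python
# Minimal sketch of the AI Act's tiered fine caps: each tier is the higher
# of a fixed amount and a share of global annual turnover. The tier labels
# and function names are illustrative, not terms from the Act's text.

FINE_TIERS = {
    "banned_practice": (35_000_000, 0.07),        # €35m or 7% of turnover
    "obligation_violation": (15_000_000, 0.03),   # €15m or 3% of turnover
    "incorrect_information": (7_500_000, 0.015),  # €7.5m or 1.5% of turnover
}


def fine_cap(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the cap: whichever is higher of the fixed sum or turnover share."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)


# Example: a company with €2bn global annual turnover using a banned practice
print(fine_cap("banned_practice", 2_000_000_000))  # 140000000.0 (7% > €35m)
# A smaller firm with €100m turnover supplying incorrect information
print(fine_cap("incorrect_information", 100_000_000))  # 7500000.0 (€7.5m > 1.5%)
```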

Ilona Simpson, CIO for the EMEA region at Netskope, commented: “The fines imposed for non-compliance follow the well-established model of GDPR enforcement; fines cap out at 7% of global turnover compared with 4% under GDPR, meaning there can be no criticism over whether this law has teeth.”

However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.

Work will now continue at the technical level in the coming weeks to finalize the details of the new regulation. The presidency will submit the compromise text to the member states’ representatives (Coreper) for endorsement once this work has been concluded.

The entire text will need to be confirmed by the EU Council and the European Parliament and undergo legal-linguistic revision before formal adoption by the co-legislators.

The bans on ‘unacceptable risk’ AI practices will start to apply six months after the AI Act enters into force.

Requirements for high-risk AI systems, powerful AI models, the conformity assessment bodies and the governance chapter will start applying one year after the Act enters into force.

The rest of the requirements will apply a year after that, two years after the Act enters into force.

A grace period of up to three years for companies to adjust their practices has been discussed.

A Blueprint for AI Regulation?

According to Kapoor, the EU AI Act will have repercussions all over the globe, including in the US.

“The EU AI Act will set global ethical standards for AI, significantly influencing US regulatory considerations. The EU AI Act will make it difficult for the US to pass its own laws, as global companies will seek a unified regulatory framework,” he said.

To promote greater alignment and flexibility, the US may have to align its strategic AI governance approach with the EU, Kapoor added. “But, until then, the EU and US should work on shared collaboration on platform governance and AI research for developing shared standards. This knowledge exchange at various levels is essential for cross-border cooperation and realizing global AI benefits. US-EU harmonization at this juncture is paramount for global AI advancements.”
