EU Passes Landmark Artificial Intelligence Act

It’s a done deal. The EU’s Artificial Intelligence Act will become law. The European Parliament adopted the latest draft of the legislation with an overwhelming majority on June 14, 2023.

Introduced in April 2021, the AI Act aims to strictly regulate AI services and mitigate the risks the technology poses. The first draft, which included measures such as safeguards against biometric data exploitation, mass surveillance systems and policing algorithms, predated the surge in generative AI adoption that began in late 2022.

The latest draft, published in May 2023, added new measures to control “foundation models.”

These include a tiered approach that classifies AI systems by risk level, from ‘low and minimal risk’ through ‘limited risk’ and ‘high risk’ to ‘unacceptable risk’ practices.

‘Low and minimal risk’ AI tools will not be regulated, while ‘limited risk’ tools will be subject to transparency obligations. ‘High risk’ AI practices, however, will be strictly regulated. The EU will require a database of general-purpose and high-risk AI systems that explains where, when and how they are being deployed in the EU.

“This database should be freely and publicly accessible, easily understandable, and machine-readable. It should also be user-friendly and easily navigable, with search functionalities at minimum allowing the general public to search the database for specific high-risk systems, locations, categories of risk [and] keywords,” the legislation says.

AI models involving ‘unacceptable risk’ will be banned altogether.

Edward Machin, a senior lawyer in the data, privacy & cybersecurity team at the law firm Ropes & Gray, welcomed the legislation: “Despite the significant hype around generative AI, the legislation has always been intended to focus on a broad range of high-risk uses beyond chatbots, such as facial recognition technologies and profiling systems. The AI Act is shaping up to be the world’s strictest law on artificial intelligence and will be the benchmark against which other legislation is judged.”

Kevin Bocek, VP for ecosystem and community at Venafi, also praised the layered approach taken by the EU: “The great thing about the EU’s AI Act is that it proposes assigning AI models identities, akin to human passports, subjecting them to a conformity assessment for registration on the EU’s database. This progressive approach will enhance AI governance, safeguarding individuals and helping to maintain control. For businesses using and innovating with AI, they’ll need to start evaluating if their AI falls under the categories of risk proposed in the AI Act and comply with assessments and registration to uphold safety and public trust.”

Just as the General Data Protection Regulation (GDPR) set the benchmark for the protection of personal data, the AI Act will be the first AI legislation in the world to impose heavy fines for non-compliance: up to €30m ($32m) or 6% of global annual turnover.

UK: Innovation Over Regulation

With this pioneering regulation, EU lawmakers hope other countries will follow suit. In April, 12 EU lawmakers working on AI legislation called for a global summit to find ways to control the development of advanced AI systems.

While a few other countries have started working on similar regulations, such as Canada with its Artificial Intelligence and Data Act (AIDA), the US and the UK appear to be taking a more cautious approach to regulating AI practices.

In March, the UK government said it was taking “a pro-innovation approach to AI regulation.” It published a white paper outlining its plan, under which no new legislation or dedicated AI regulator will be created. Instead, responsibility will fall to existing regulators in the sectors where AI is applied.

In April, the UK announced it would invest £100m ($125m) to launch a Foundation Model Taskforce, which it hopes will spur the development of AI systems and boost the nation's GDP.

On June 7, British Prime Minister Rishi Sunak announced that the UK will host the first global AI summit in fall 2023.

Later, on June 12, Sunak announced at London Tech Week that Google DeepMind, OpenAI and Anthropic had agreed to open up their AI models to the UK government for research and safety purposes.

Machin commented: “It remains to be seen whether the UK will have second thoughts about its light-touch approach to regulation in the face of growing public concern around AI, but in any event the AI Act will continue to influence lawmakers in Europe and beyond for the foreseeable future.” 

Lindy Cameron, CEO of the UK National Cyber Security Centre (NCSC), highlighted the UK's leading role in AI development during her keynote address at Chatham House's Cyber 2023 conference on June 14.

She said that “as a global leader in AI – ranking third behind the US and China – […] the UK is well placed to safely take advantage of the developments in artificial intelligence. That’s why the Prime Minister’s AI Summit comes at a perfect time to bring together global experts to share their ideas.”

While she outlined the NCSC's three goals in addressing the cyber threats posed by generative AI – helping organizations understand the risk, maximizing the benefits of AI to the cyber defense community, and understanding how adversaries are using AI and how to disrupt them – she did not mention AI regulation.
