AI Accountability Framework Created to Guide Use of AI in Security

Europol has announced the development of a new AI accountability framework designed to guide the use of artificial intelligence (AI) tools by security practitioners.

The move represents a major milestone in the Accountability Principles for Artificial Intelligence (AP4AI) project, which aims to create a practical toolkit that can directly support AI accountability when used in the internal security domain.

The “world-first” framework was developed in consultation with experts from 28 countries, representing law enforcement officials, lawyers and prosecutors, data protection and fundamental rights experts, as well as technical and industry experts.

The initiative began in 2021 amid growing interest in and use of AI in security, both by internal cybersecurity teams and by law enforcement agencies tackling cybercrime and other offenses. Research conducted by the AP4AI project demonstrated significant public support for this approach; in a survey of more than 5,500 citizens across 30 countries, 87% of respondents agreed or strongly agreed that AI should be used to protect children and vulnerable groups and to investigate criminals and criminal organizations.

However, major ethical issues remain around the use of AI, particularly by government agencies such as law enforcement. These include concerns about its impact on individual data privacy rights and the prospect of bias against minority groups. In AP4AI's survey, over 90% of the citizens consulted said the police should be held accountable for the way they use AI and for its consequences.

Following the creation of the AI accountability framework, the project will now work on translating these principles into a toolkit. The freely available toolkit will help security practitioners implement the accountability principles across different applications of AI within the internal security domain, with the aim of ensuring those applications are used in an accountable and transparent manner.

It is hoped the AP4AI project will ultimately ensure police and security forces can effectively leverage AI technologies to combat serious crime in an ethical, transparent and accountable way.

Catherine De Bolle, executive director of Europol, commented: “I am confident that the AP4AI Project will offer invaluable practical support to law enforcement, criminal justice and other security practitioners seeking to develop innovative AI solutions while respecting fundamental rights and being fully accountable to citizens. This report is an important step in this direction, providing a valuable contribution in a rapidly evolving field of research, legislation and policy.”

Professor Babak Akhgar, director of the Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research (CENTRIC), added: “The AP4AI project will draw upon a huge range of expertise and research to develop world-first accountability principles for AI. Police and security agencies across the globe will be able to adopt a robust AI Accountability Framework so that they can maintain a balanced, proportionate and accountable approach.”

AP4AI is jointly conducted by CENTRIC and the Europol Innovation Lab and supported by Eurojust, the EU Agency for Asylum (EUAA) and the EU Agency for Law Enforcement Training (CEPOL), with advice and contributions from the EU Agency for Fundamental Rights (FRA), in the framework of the EU Innovation Hub for Internal Security.