#DataPrivacyWeek Interview: Overcoming Privacy Challenges in AI


Artificial intelligence (AI) promises benefits to humanity that would have scarcely seemed believable even a decade ago. From detecting cyber-threats in real time and diagnosing diseases by processing vast volumes of data to facilitating the driverless car revolution, this technology's current and future potential is enormous.

Yet, before its full potential is unleashed, a number of challenges around the use of AI must be addressed. Many of these revolve around its relationship with big data analytics, which potentially calls into question the sanctity of individual privacy, in addition to other ethical dilemmas. To discuss these issues and how they can be overcome, Infosecurity caught up with Katharina Koerner, senior fellow of privacy engineering at the International Association of Privacy Professionals (IAPP), during this year’s Data Privacy Week. 

What are the biggest privacy challenges you have observed relating to the growing use of artificial intelligence (AI)?

AI and machine learning systems require large amounts of data, and the use of big data seems like the antithesis of privacy. How can organizations leverage the full potential of AI while protecting privacy? When building AI and ML systems, privacy principles have to be kept top of mind – and this is what I consider to be the biggest challenge. Privacy principles require organizations to minimize the amount of personal information they collect and hold. The purpose of collecting the data must be clear from the very beginning, and, with a few exceptions, the data should only be used for that purpose. Additionally, bias in AI systems can lead to discrimination, AI algorithms often lack transparency and explainability, and AI has given rise to new security issues. All these aspects, often summarized as ‘trustworthy,’ ‘ethical’ or ‘responsible AI,’ are largely covered by existing privacy regulations and must be considered. 

Katharina Koerner, senior fellow of privacy engineering, IAPP

What additional threats to privacy could, or already are, manifesting due to the increasing use of AI?

Facial recognition is a good example of an AI-based technology that threatens privacy. It can be leveraged to steal identities, stalk people and create disadvantages in the job market. In addition, it can be used for predatory marketing, is susceptible to generating false positives and is often used without permission. These are all threats to our privacy. Even more concerning, we cannot reset our biometrics like passwords if there is a data breach. Regulators are aware of the risks, and we’ve seen an increase in bans on the technology in public spaces globally. In November 2021, 193 countries, including China, signed UNESCO's first-of-its-kind recommendation on AI ethics, which called for an end to the use of AI for “social scoring or mass surveillance purposes.” In the same month, Clearview AI faced enforcement action in Australia, the UK, Canada and France almost simultaneously for collecting images and biometric data without consent. 

Do you believe data privacy laws worldwide are currently sufficient to deal with these issues? Are you optimistic we are starting to see positive moves in this direction?

Currently, AI governance principles and privacy regulations seem fragmented. Yet, when it comes to the processing of personal data, there is actually a big overlap between non-binding principles for the responsible use of AI and current privacy regulations. Additionally, new regulations regarding AI, such as the EU Artificial Intelligence Act, are expected next year, while the Federal Trade Commission (FTC) has filed for rulemaking authority on privacy and AI. This mixture of regulations, guidelines and requirements will become very complex but should be comprehensive. In my view, the true challenge is not so much a lack of regulations but the difficulty of putting them into practice. 

Can AI technologies be designed in a way that protects individual privacy but does not stifle innovation in this area? How can this be achieved?

Yes, privacy-by-design for AI applications and ML pipelines is possible. It can even facilitate collaboration, research and innovation. In a world that revolves around consumer trust, the value of personal data is tied to its responsible use. The foundation of AI technologies should be appropriate governance processes that emphasize privacy and ensure compliance. Proper privacy impact assessments need to be conducted, and their findings must be addressed. Privacy-enhancing technologies, including differential privacy and synthetic data, seem very promising for mitigating risks. The technologies and tools for translating privacy concerns into technical design choices already exist and are good resources to rely on. 
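To make the differential privacy mention above concrete, here is a minimal sketch of the Laplace mechanism applied to a counting query, using only the Python standard library. The function name `dp_count` and the epsilon value are illustrative choices for this sketch, not something from the interview or a specific product.

```python
import math
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    Smaller epsilon means stronger privacy but noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Inverse-CDF sampling from a Laplace(0, scale) distribution
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Illustrative usage: count users over 30 without revealing the exact figure
ages = [23, 35, 41, 29, 52, 38, 27, 60]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=0.5)
```

Real deployments would use a vetted library rather than hand-rolled noise sampling, but the sketch shows the core trade-off: the analyst still gets a useful aggregate while any individual's presence in the data is masked.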
