#DataPrivacyDay: Key GDPR Compliance Issues to Watch in the Artificial Intelligence Space

Organizations increasingly use artificial intelligence (AI)-driven solutions in their day-to-day business operations. For example, many organizations rely on so-called machine learning (ML) models, which are continuously trained on data to improve products and services. These AI-driven solutions generally require the processing of significant amounts of personal data to train the model, often for a purpose other than the one for which the data was originally collected. There is a clear tension between such further use of vast amounts of personal data and some of the key data protection principles outlined in privacy regulations.

Laws like the General Data Protection Regulation (GDPR) aim to minimize the processing of individuals’ personal data to what is strictly necessary to achieve a specific, well-defined purpose. While the GDPR does not rule out the further use of personal data in connection with AI-driven solutions, businesses relying on such solutions should address a number of key GDPR compliance issues.

First, there are certain GDPR challenges of a contractual nature that service providers need to address with their corporate customers. If a service provider intends to further process personal data received from a corporate customer to improve its services through AI training and analysis, the customer agreement should clearly anticipate such further processing. In addition, the agreement should define each party’s role (controller vs. processor) and responsibilities in this respect. Furthermore, the agreement should address how individuals will be informed about the further AI-related processing of their personal data and, where necessary, how their consent will be obtained or how they will be given the opportunity to object before their information is used to train ML models.

The UK’s Information Commissioner’s Office (ICO) has issued draft guidance on its AI auditing framework, taking the position that “if you initially process data on behalf of a client as part of providing them a service, but then process that same data from your clients to improve your own models, then, you are a controller for this processing.”

Similarly, the French Commission Nationale de l’Informatique et des Libertés (CNIL) has recently issued guidance on the further use of personal data by data processors. In its guidance, the CNIL does not rule out further use of personal data by data processors, but sets forth strict conditions that must be met. These conditions include, among others, that the corporate customer (i.e., the original controller) should give written authorization to the service provider after carrying out a compatibility test, which should assess whether the envisaged further use of the personal data is compatible with the original purpose for which it was collected. Furthermore, the original controller is deemed responsible for ensuring that individuals are informed about the further use of their personal data and can object.

"An AI risk assessment should be conducted to identify and mitigate potential GDPR risks"

Such considerations are relevant for contract negotiations, which can become complex when the service has a cross-customer component or involves third-party matching and reuse of the input data. In addition, the contract work may involve allocating certain risk-mitigation responsibilities between the parties, such as amending privacy notices (or otherwise increasing transparency) or de-identifying data elements prior to running ML training, where possible.
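To make the de-identification idea concrete, the sketch below shows one simple form such a step could take before records are handed to ML training: direct identifiers are dropped outright, and any identifier still needed to join datasets is replaced with a keyed hash. The field names, the key handling and the HMAC-based approach are assumptions for the example rather than a prescribed method.

```python
import hashlib
import hmac

# Hypothetical secret held by the controller; restricting or destroying it
# strengthens the de-identification. Illustrative only.
PSEUDONYMIZATION_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def deidentify_record(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize join keys before ML training.

    Field names are illustrative; map them to your own schema.
    """
    drop = {"email", "phone", "full_name"}   # direct identifiers: remove outright
    keyed = {"customer_id"}                  # identifiers still needed for joins
    out = {}
    for key, value in record.items():
        if key in drop:
            continue
        out[key] = pseudonymize(str(value)) if key in keyed else value
    return out

# Example: a raw record from the customer's service data
raw = {"customer_id": "C-1042", "email": "jane@example.com",
       "full_name": "Jane Doe", "usage_minutes": 314, "plan": "pro"}
print(deidentify_record(raw))
```

Using a keyed hash rather than a plain hash means the mapping back to the original identifier depends on a secret the controller can restrict or destroy, rather than on the identifier alone.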

Second, an AI risk assessment should be conducted to identify and mitigate potential GDPR risks. The assessment should focus on any impact the AI solution may have on individuals and identify risk-mitigation measures where necessary. It should also evaluate how the proposed AI solution complies with key GDPR principles, such as necessity and proportionality, purpose limitation and storage limitation (i.e., retaining personal data in a form that permits identification of individuals for no longer than necessary).

The assessment should further document any trade-offs that need to be made. For example, the more data an ML model is trained on, the more accurate its predictions are likely to be; however, the more information is processed, the greater the privacy and cybersecurity risks. In addition, the assessment should identify remediation steps, such as implementing privacy-enhancing technologies, including pseudonymization and encryption of personal data where technically feasible. Where AI is used to make automated decisions, such as creating a score and approving or rejecting actions based on that score, it should be verified whether the decision involves meaningful human input. If not, the GDPR restrictions on automated individual decision-making should be analyzed and addressed. Last but not least, the AI risk assessment should confirm that the AI solution is used ethically and does not lead to unwanted bias.
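As an illustration of the human-input point, the sketch below shows one pattern for keeping a human in the loop where a model score drives approvals and rejections: clear approvals may be automated, but adverse or borderline outcomes are escalated to a reviewer. The threshold and names are purely illustrative assumptions, and this is a sketch of the pattern rather than legal guidance on Article 22.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    score: float
    outcome: str      # "approved" or "needs_human_review"
    automated: bool   # True only if no human is involved in the outcome

# Illustrative value only; a real threshold would come from the risk assessment.
APPROVAL_THRESHOLD = 0.8

def decide(score: float) -> Decision:
    """Approve clear cases automatically; escalate the rest to a human reviewer.

    Routing adverse or borderline outcomes to meaningful human review is one
    way to avoid a solely automated decision in the sense of GDPR Article 22.
    """
    if score >= APPROVAL_THRESHOLD:
        return Decision(score, "approved", automated=True)
    # Never auto-reject: a human must confirm any adverse outcome.
    return Decision(score, "needs_human_review", automated=False)

print(decide(0.93))  # high score: automated approval
print(decide(0.41))  # low score: escalated to a human reviewer
```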

Third, from an internal governance perspective, an organization’s use of AI-driven solutions deserves serious attention. The organization’s leadership and oversight functions should be aware of these solutions and review their use. As a practical matter, this may require setting up cross-functional teams and collaboration between senior management, privacy experts and data scientists. These teams should work together where possible to elevate the organization’s accountability posture around the use of AI, for example, by embedding decisions on how the organization handles AI issues in its privacy compliance program. This may involve, among other things, maintaining accurate records of AI-related data-processing activities, publishing training materials and implementing internal policies and InfoSec measures.

Beyond the GDPR, manufacturers, distributors and users of AI-driven solutions in the EU will face additional obligations once the draft EU AI Regulation is adopted.

In conclusion, the increasing use of AI in regular business operations triggers privacy and security challenges that require thoughtful consideration from a contractual, compliance and internal governance perspective.    
