Replacing GDPR in the UK: Assessing AI and Research Provisions

The UK government’s recently reintroduced Data Protection and Digital Information (DPDI) Bill has made plenty of headlines, and represents a crucial component of the nation’s post-Brexit strategy.

One element of the legislation that has come under scrutiny is the implications of its proposed changes to data privacy rules for research and the use of AI.

The legislation updates the definition of scientific research to clarify that commercial organizations have the same freedoms as academics to process personal data for research purposes. The government claims that current data laws are unclear on how scientists can process personal data for research purposes, “which holds them back from completing vital research that can improve the lives of people across the country.”

A Research Accelerator or a Hindrance to Consumer Privacy?

Edward Machin, a senior lawyer in Ropes & Gray’s data, privacy & cybersecurity practice, believes that the proposals around scientific data will enable major advancements in research, and “will in time come to be seen as an improvement on the status quo.”

However, Valerie Lyons, COO and senior consultant at Ireland-headquartered firm BH Consulting, outlined concerns about its potential impact on consumer privacy. She noted that the GDPR already allows the secondary processing of personal data for scientific research in the commercial sector, provided the right safeguards are in place, and that offering businesses the same freedoms as academics to undertake research could be problematic from a privacy perspective.

Lyons pointed out that academics working in research institutions already have a clear legal basis to carry out their work, as well as long-standing and robust internal ethical charters and frameworks to protect data privacy. This contrasts with the world of business, where the “need to make money overshadows its ability to self-regulate.”

In addition, Lyons is concerned that the Bill’s vague and non-exhaustive definition of scientific research could be exploited by large tech companies to the detriment of individuals’ privacy. The law defines this as research that ‘could reasonably be described as scientific,’ which could include activities such as innovative research into technological development.

Part 1 of Infosecurity's analysis of the UK's DPDI Bill examines the business case behind the updated rules and the potential impact on the UK's adequacy agreement with the EU.

Lyons asked: “Do we think that Google, Facebook and the other big tech companies will usurp this ‘innovative scientific research’ loophole to use UK data subjects’ personal information for their own benefit under the guise of research that could reasonably be described as scientific? Do we think that this will encourage organizations who value profit over ethics (‘just because you can doesn’t mean you should’) to stake claim to data centers in the UK?”

Updating Rules on AI

Another ambition of the UK government in bringing this law forward is to update and clarify the rules around the use of innovative technologies, such as AI. While the growing sophistication of AI offers huge opportunities in areas like healthcare, there are concerns about automated decision-making and profiling.

The revamped Bill has also come shortly after the rise of AI chatbot ChatGPT, with OpenAI’s privacy practices coming under scrutiny from experts.

Upon the Bill’s publication, the government claimed that current data protection laws are too complex and lack clarity for solely automated decision-making and profiling “which makes it difficult for organizations to responsibly use these types of technologies.”

The new rules intend to build trust in these technologies by putting safeguards in place around the use of automated decision-making and profiling. These include the opportunity to challenge solely automated decisions, such as being denied a job or loan.

While the updated rules around the use of AI have been emphasized as a crucial aspect of the new law, Jonathan Armstrong, partner at Cordery, doesn’t view them as particularly radical, pointing out that “the UK has already used GDPR to hit an AI developer and other data protection authorities are perfectly able to use the existing GDPR framework to regulate AI.”

Lyons concurred with this assessment, considering it little more than “window dressing.” She noted: “The proposals claim that they will ensure people are made aware of automated decision-making, and can challenge and seek human review, where those decisions may be ‘inaccurate or harmful’. Surely this is already part of GDPR in data subject rights and the right to be informed and the right to rectification?”

She also emphasized that the EU’s GDPR should not be viewed in isolation, particularly with the EU’s Artificial Intelligence Act currently being prepared to govern the use of AI. “The UK revised GDPR will need to be considered in that light and one would expect a UK AI Regulation will also address the above,” added Lyons. 

The Devil is in the Detail

Other components of the wide-ranging law include a crackdown on nuisance calls and texts, and measures to limit the number of consent pop-ups people see while browsing online.

Additionally, the government is seeking to pave the way for digital ID by establishing a framework for the use of trusted and secure digital verification services.

The experts Infosecurity spoke to also highlighted the need for more information and clarifications to be published alongside the Bill in its current form. While Sarah Pearce, partner at law firm Hunton Andrews Kurth, is generally positive about the emphasis on a risk-based approach to compliance, she believes more clarity is needed on what this means in practical terms.

“It needs to be very clearly spelt out in terms of what they’re envisaging there – at the moment it sounds a bit woolly and not adding much to the risk-based approach that most organizations have followed to date,” she said.

Armstrong was skeptical of the UK government’s claims around cost savings, stating that “there is no credible evidence to support the claimed cost savings.”

The government’s updated impact assessment reflecting the rule changes has been submitted to the Regulatory Policy Committee for independent scrutiny and will be published following a review. This will be crucial in assessing the overall impact of the law in its current form, added Armstrong.

The updated DPDI Bill is currently undergoing Parliamentary processes, with the first reading taking place on March 8, 2023. The second reading is due to begin on April 17, 2023.