#DataPrivacyWeek: Addressing ChatGPT's Shortfalls in Data Protection Law Compliance


Generative artificial intelligence models like DALL-E, Stable Diffusion, Midjourney and ChatGPT are making headlines – when they are not writing them or creating the pictures that come with them.

However, like any new technology, these tools – which are based on models trained on vast amounts of unlabeled data, known as foundation models – still need to be regulated.

In a previous article, Infosecurity investigated some of the privacy issues that OpenAI’s chatbot ChatGPT raises. One concern experts highlighted was the risk of releasing inaccurate data into the wild and, as Dennis Hillemann, partner at the law firm Fieldfisher, put it, letting it "feed the model itself." This could potentially lead to the spread of disinformation and bullying campaigns.

Camilla Winlo, head of data privacy at Gemserv, said, "Many people would be uncomfortable with their personal information being taken and used for profit by organizations without their knowledge. This kind of process can also make it difficult for people to remove personal information they no longer want to share – if they don’t know an organization has it, they can’t exercise their rights."

To address this pitfall of foundation models, Hillemann argued that "generative AI companies like OpenAI should put in place safeguards, starting with offering the right to rectify when a user discovers inaccurate data. It is a requirement under the EU’s General Data Protection Regulation (GDPR) and other data protection laws."

"Looking at what OpenAI disclosed to the public on their privacy notices, I cannot see whether the company offers this right," he questioned.

Sharing Data With Third Parties: More Transparency Needed

Another privacy issue raised with ChatGPT is OpenAI’s admission that it may "share [the users’] personal information with third parties in certain circumstances without further notice to [them], unless required by the law," found under Article 3 of the firm’s privacy policy page.

"OpenAI's tools are used everywhere in the world, and they should be more transparent about what they do with users' personal information."Kohei Kurihara, CEO, Privacy by Design Lab

According to Kohei Kurihara, CEO of the Privacy by Design Lab, the phrase "unless required by the law" is vague and insufficient. "OpenAI’s tools are used everywhere in the world, and they should be more transparent about what they do with users’ personal information," he said.

Alexander Hanff, a member of the European Data Protection Board's (EDPB) support pool of experts, goes further, arguing that the absence of an opt-in principle in this data sharing "clearly violates GDPR, but also Consumer Protection Law in the EU, which requires servicing contracts to be fair and equitable. A unilateral right of one party to use your data without your consent is not equitable or fair."

"There needs to be a legal basis, and the only one they could use in this example would be legitimate interest. But they would need to show that this legitimate interest overrides the right for the individuals to privacy – and they won’t win this case," he added.

Is ChatGPT Compliant With GDPR? Not Sure, Says ChatGPT

When Hillemann and fellow Fieldfisher lawyer Stephan Zimprich asked the chatbot whether it was compliant with GDPR, ChatGPT said it "cannot say for certain whether ChatGPT is GDPR compliant, as I am not aware of its specific design or implementation. Whether ChatGPT is GDPR compliant will depend on how it is designed and used, and whether it collects, processes or stores any personal data of individuals who are located in the EU."

Again, ChatGPT seems to hold users, not OpenAI, accountable for data privacy compliance.

For Hillemann, this response is insufficient. "Since we saw that ChatGPT could answer very complex legal questions with a high level of accuracy, this one seems way too simple and vague, especially when it comes to such a critical topic."

The only data protection regulation OpenAI mentions on its privacy policy page is the California Consumer Privacy Act (CCPA), which applies in the state where the company is based. For instance, the company states that California residents have "the rights to know what Personal Information we have collected and how we have used and disclosed that Personal Information; the right to request deletion of your Personal Information; and the right to be free from discrimination relating to the exercise of any of your privacy right."

"The US has a different approach to privacy: in the EU, we require consent to be opt-in whereas, in the US, consent can be assumed in many cases, as long as there is an option to opt out," Hanff explained.

Hanff suggests that OpenAI invest in a privacy team with fact-checkers and someone focusing on ethics to address all these concerns.

Infosecurity contacted OpenAI but did not receive a direct response to its requests for comment on this issue. The company provided a fact sheet, but it did not directly address any of the privacy concerns.
