Data Privacy Day: Why AI’s Rise Makes Protecting Personal Data More Critical Than Ever

January 28 marks Data Privacy Day. Founded by the Council of Europe in 2007, Data Privacy Day aims to raise public awareness about the right to personal data protection and privacy.

Now in its 19th year, Data Privacy Day faces a much-changed world from the one in which it originated: the first iPhone was revealed just two weeks before the first Data Privacy Day, leading to the smartphone revolution – and a revolution in how we access and interact with the internet, both at work and at home.

In 2026, we are in the middle of another technological revolution, this time brought about by the rise of artificial intelligence (AI).

AI isn’t new; it has been around for many years, commonly labelled as machine learning and often used for applications such as scientific research.

But when OpenAI launched ChatGPT in late 2022, everything changed – the Large Language Model (LLM) opened AI use up to the wider world. It didn’t take long for Microsoft, Google and many other technology companies and software vendors to release their own LLMs and AI tools.

The advantage of these AI tools, we’re told, is that using them makes us more efficient. For example, people can use AI tools embedded in Microsoft 365 or third-party LLMs to help write emails, summarize documents and more. But are employees keeping data privacy in mind as they do so?

AI Agents and LLMs as a Data Privacy Risk

According to The LayerX Enterprise AI & SaaS Data Security Report 2025, 77% of employees said they have pasted company information into AI or LLM services, while 82% of those respondents said they had used a personal account to do so.

This creates cybersecurity and data privacy risks on two fronts. The first is that the information employees feed to an LLM as prompts potentially contains sensitive corporate data, which runs the risk of being leaked.

In theory, AI companies have guardrails in place to ensure that any information used to prompt an LLM, no matter the context or the source, cannot be reverse engineered to reveal the data.

However, attackers have been known to bypass such security features by using prompt injections: malicious queries disguised as legitimate requests, designed to manipulate the model and trick the AI into revealing data it shouldn’t.

The second is that the use of personal ChatGPT, Claude, Gemini, Copilot or other AI accounts presents a problem for enterprises: corporate information is being uploaded to models via accounts which aren’t monitored by security teams.

That doesn’t just create a data privacy risk around the data being uploaded to LLMs: if the sensitive corporate information used to make those queries is left sitting in employees’ personal email or cloud storage accounts, the business is also at risk of a data breach should those accounts be hacked.

In each of these scenarios, the employee is using AI to be more efficient at work: they’re not actively attempting to jeopardize data privacy, but without the correct tools and rules in place, there are risks.

Securing AI in the Enterprise

One key step in ensuring data privacy and data security is to thoroughly audit what data an organization holds and where it is stored. Without this knowledge, ensuring data privacy is a significant challenge.

“The area where more work is required is putting technology-based controls around AI policies and procedures. That is still one of the best ways to make sure they’re being adhered to,” Kamran Ikram, senior managing director and cyber security and resiliency lead at Accenture, told Infosecurity.

“That starts with having a good inventory of the data that exists. Because if you don’t know if it exists, you don’t know if it’s being used,” he added.

Organizations should also ensure that appropriate protections are in place to identify and prevent potential data privacy breaches.

“Have the right controls around that data to make sure only the right people can access and use it. There’s another benefit of that which is if a threat actor infiltrates your organization, if you have the right controls, it limits what they can see and access,” said Ikram.

Technical controls on how data is used in relation to AI tools are a must for data privacy, but they aren’t the only protection which should be put in place. It’s also vital for organizations to provide training to ensure that employees know how to leverage AI appropriately – and that they know what counts as inappropriate, risky use with implications for data privacy.

“Focus on the employees. Empower the workforce to be able to use these tools by giving them that proper guidance,” Chris Gow, senior director of EU public policy and head of government affairs at Cisco, told Infosecurity.

For Gow, it’s also important that businesses which expect staff to use AI tools provide them with enterprise versions of those tools, to reduce the risk of data leaks or breaches via unauthorized personal AI applications.

“As a company you can get enterprise versions of these tools: that’s going to encourage your employees to use them, rather than looking for shadow AI externally,” he said.

As with other areas of corporate data privacy, such as GDPR compliance, training should also form part of the strategy to ensure that staff are informed about how to appropriately handle data. They should also be provided with guides on what to do if they suspect a potential breach of data via an AI tool.

As AI becomes more embedded in workplaces, applications, services and society, people will expect their data to be handled carefully. Having full-fledged data privacy plans in place to ensure that is the case is therefore vital.

“Curating your data and having a privacy program in place isn’t just a compliance cost. There are clear benefits from that and it’s something that will be recognized more in an AI world,” said Gow.

Conclusion 

At a time when many businesses are moving swiftly to adopt AI into their ecosystems, Data Privacy Day should act as a catalyst for those organizations to think about the privacy and security issues which could occur if AI solutions are implemented incorrectly.

As well as understanding what data the organization holds, where it is and how it is used, data privacy leaders should ensure that their staff are trained and knowledgeable about how to appropriately handle data when using AI.
