Data Privacy Week: AI Has Put Data Privacy Top of Mind

It’s clear that 2023 was a significant year for AI innovation, with advancements already reshaping industries and redefining the way businesses think and operate. Fueled by the data economy, the integration of AI into countless operational tasks has allowed businesses to make faster decisions, further automate processes, and better predict behavior with remarkable efficiency.

Generative AI systems are already being used, both directly and indirectly, by businesses the world over to support employees in note-taking, chatbot and routine content generation tasks.

However, as AI companies race to collect as much data as possible to feed into training their models, there are concerns about the potential misuse of sensitive information and the erosion of privacy. For businesses, the rise of AI has raised inevitable questions around data privacy and made clear the need to safeguard sensitive information, prompting many to reassess and fortify their data protection strategies.

Increasing Regulatory Intervention in AI

Acutely aware of the risks, regulators are grappling with the need to strike a balance between fostering innovation and ensuring data privacy. In a crucial move toward safety and better protection, the EU reached a provisional agreement on its AI Act in December 2023.

Included in the Act is the requirement that all AI providers used by governments and law enforcement operations must make their models transparent and publish a detailed summary of the training data used.

Aligned with this sentiment, in December 2023 the UK’s Information Commissioner warned companies to consider customers’ personal information in all circumstances when using AI, or risk a decline in consumer trust and, in turn, in innovation.

It’s clear that as regulators rush to catch up with the pace of innovation, data privacy is on the agenda for 2024, and businesses must start the year on the front foot to ensure they are not exposing sensitive or proprietary data to unnecessary risk via third-party AI apps and integrations. Companies that fail to take these measures put themselves at risk of serious breaches and non-compliance.

Unfortunately, apps such as ChatGPT are often misused by employees who knowingly or unknowingly upload personal data, highly sensitive proprietary source code, or even sensitive financial information to the platform. In fact, Netskope’s Threat Labs researchers discovered that source code is posted to ChatGPT more than any other type of sensitive data in the workplace, at a rate of 158 incidents per 10,000 users per month. So how do we manage this behavior and minimize risk?

How to Solve Data Protection Challenges with AI

Fortunately, many data protection issues caused by AI use can also be solved with the help of AI. Here are four ways to ensure data is protected by mobilizing AI within a business’s security posture.

1. Sensitive Data Categorization

The first step to prioritizing data privacy is being able to decipher which information requires protection. Data categorization is a key stage in this process, and it is an extremely time-consuming task if performed manually. Fortunately, AI and machine learning (ML) can be harnessed to automate the categorization of data across departmental systems.

For example, AI and ML can be used to scan images and identify personal data, financial data, security passes, or even contracts that include sensitive terms, then categorize them appropriately for data protection policy handling. Once data is categorized, it can be managed and protected by security controls with greater efficiency.
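As a rough illustration of the idea (not any specific vendor’s implementation), the minimal sketch below trains a toy text classifier to sort snippets into categories such as source code, financial data or personal data. The labels, training examples and sample input are all hypothetical placeholders; a production system would rely on far richer training data and purpose-built classification engines.

```python
# Minimal sketch: ML-based categorization of text snippets into sensitive-data
# categories. Training data, labels and the sample input are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hypothetical training set of (snippet, category) pairs.
TRAINING_DATA = [
    ("def connect(db_user, db_password):", "source_code"),
    ("import requests\nresp = requests.get(url)", "source_code"),
    ("Q3 revenue forecast: $4.2M, EBITDA margin 18%", "financial"),
    ("Invoice #1042, amount due 12,500 EUR", "financial"),
    ("Jane Doe, DOB 1990-04-12, passport no. X1234567", "personal_data"),
    ("Employee home address on file for payroll purposes", "personal_data"),
    ("Agenda for Tuesday's team stand-up meeting", "non_sensitive"),
    ("Reminder: office closed on the public holiday", "non_sensitive"),
]

texts, labels = zip(*TRAINING_DATA)

# TF-IDF features plus logistic regression: a deliberately simple stand-in for
# whatever model a real classification engine would use.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

def categorize(snippet: str) -> str:
    """Return the predicted sensitive-data category for a snippet."""
    return classifier.predict([snippet])[0]

if __name__ == "__main__":
    sample = "Customer record: John Smith, national insurance no. QQ123456C"
    # With this toy data the prediction should land in 'personal_data'.
    print(categorize(sample))
```

Once snippets carry a category label like this, downstream policy engines can apply the appropriate handling rules automatically rather than relying on manual review.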

2. Employee Training

By advocating for greater AI awareness training in the workplace, security leaders can help defend their organizations against data loss and protect critical information. However, if employees only receive a once-yearly training session, as is unfortunately often the case, that knowledge is unlikely to be retained for use at the crucial moment.

For successful outcomes, leaders should look to deploy real-time coaching technology (again, powered by AI) that reminds employees of company policy at the moment of risk, for example when they are about to upload sensitive data to an AI app. If necessary, security teams can make further interventions, such as directing employees to approved alternative applications or blocking access entirely.
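The coaching decision might look something like the minimal sketch below, which inspects an outbound upload to an AI app, reminds the user of policy, and either allows, coaches or blocks. The app lists, policy wording and function names are hypothetical placeholders, not any particular product’s API.

```python
# Minimal sketch of a real-time coaching decision for uploads to AI apps.
# App lists, policy text and actions are hypothetical placeholders.
from dataclasses import dataclass

APPROVED_AI_APPS = {"approved-enterprise-assistant.example.com"}
UNAPPROVED_AI_APPS = {"chat.openai.com", "some-other-genai.example.com"}

@dataclass
class UploadEvent:
    user: str
    destination: str          # hostname the data is being sent to
    contains_sensitive: bool  # result of an upstream DLP/classification check

def notify_user(user: str, message: str) -> None:
    # Stand-in for however coaching messages actually reach the user
    # (browser prompt, chat message, etc.).
    print(f"[coaching -> {user}] {message}")

def coach(event: UploadEvent) -> str:
    """Decide how to respond to an upload attempt and return the action taken."""
    if event.destination in APPROVED_AI_APPS:
        return "allow"
    if event.destination in UNAPPROVED_AI_APPS:
        if event.contains_sensitive:
            # Block outright and point the user at policy.
            notify_user(event.user,
                        "Policy reminder: sensitive data must not be uploaded "
                        "to unapproved AI apps. This upload has been blocked.")
            return "block"
        # Coach rather than block: remind the user and offer an approved app.
        notify_user(event.user,
                    "Reminder: company policy recommends using the approved "
                    "AI assistant for work content.")
        return "coach"
    return "allow"

if __name__ == "__main__":
    event = UploadEvent(user="alice", destination="chat.openai.com",
                        contains_sensitive=True)
    print(coach(event))  # -> "block"
```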

3. Data Loss Prevention

Data loss prevention (DLP) tools can enlist AI to detect files, posts or emails containing potentially sensitive information, such as source code, passwords or intellectual property, and to alert on or block those assets in real time as they leave the organization.

Amid the current generative AI hype, this is a highly useful way of ensuring that posts containing sensitive information are not uploaded to third-party platforms that have not been approved. It works best in conjunction with real-time employee coaching, ensuring that employees are alerted to potential data misuse as it happens, i.e. as soon as sensitive information is detected.
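To give a small flavor of the detection step, the sketch below applies a handful of illustrative regular expressions for credentials, card numbers and source-code markers to outbound content before it leaves. Real DLP engines combine much broader pattern libraries with ML classifiers and exact-match fingerprinting; the patterns and actions here are assumptions for the example only.

```python
# Minimal sketch of pattern-based detection for outbound content (DLP-style).
# The patterns below are illustrative, not an exhaustive or production ruleset.
import re

SENSITIVE_PATTERNS = {
    "credential": re.compile(r"(?i)\b(password|passwd|secret)\s*[:=]\s*\S+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "source_code": re.compile(r"\b(def|class|import)\s|#include\s*<"),
}

def scan_outbound(content: str) -> list[str]:
    """Return the list of sensitive-data types found in outbound content."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(content)]

def enforce(content: str) -> str:
    """Alert and block if anything sensitive is about to leave the organization."""
    findings = scan_outbound(content)
    if findings:
        # In a real deployment this would raise an alert to the security team
        # and block the upload, post or email inline.
        print(f"ALERT: blocked outbound content containing {findings}")
        return "block"
    return "allow"

if __name__ == "__main__":
    print(enforce("password = hunter2\nimport os"))        # -> "block"
    print(enforce("See you at the team lunch on Friday"))  # -> "allow"
```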

4. Threat Detection

Another essential way to protect data is to ensure AI is enlisted to monitor and detect threats such as malware and ransomware while also reducing the attack surface. For large enterprises, AI-powered device intelligence along with borderless SD-WAN can proactively monitor the network and provide predictive insights for teams, preventing network issues before they happen.

AI can also be harnessed to detect and flag unusual behavior as part of a zero trust approach, where access from an unusual device or location is automatically made visible to network and security teams.
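As a simple illustration of flagging unusual access within a zero trust approach, the sketch below keeps a per-user baseline of previously seen devices and countries and surfaces any login that deviates from it. The baseline store, event fields and alerting are simplified assumptions for the example; real systems use richer behavioral analytics and risk scoring.

```python
# Minimal sketch: flag logins from devices or locations a user has not been
# seen on before, as an input to zero trust access decisions. The in-memory
# baseline and event shape are simplified assumptions for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    device_id: str
    country: str

class AccessAnomalyDetector:
    def __init__(self) -> None:
        # Per-user sets of devices and countries seen so far.
        self._devices = defaultdict(set)
        self._countries = defaultdict(set)

    def check(self, event: LoginEvent) -> list[str]:
        """Return any anomaly reasons, then update the user's baseline."""
        reasons = []
        if self._devices[event.user] and event.device_id not in self._devices[event.user]:
            reasons.append("unrecognized device")
        if self._countries[event.user] and event.country not in self._countries[event.user]:
            reasons.append("unusual location")
        self._devices[event.user].add(event.device_id)
        self._countries[event.user].add(event.country)
        if reasons:
            # In practice this would feed a SIEM or zero trust policy engine
            # rather than just printing.
            print(f"FLAG: {event.user} login flagged ({', '.join(reasons)})")
        return reasons

if __name__ == "__main__":
    detector = AccessAnomalyDetector()
    detector.check(LoginEvent("bob", "laptop-01", "UK"))   # builds the baseline
    detector.check(LoginEvent("bob", "unknown-99", "RU"))  # flagged
```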

To conclude, businesses are evolving fast as they introduce AI into their organizations and shift operations to be more streamlined and efficient. However, privacy and data protection remain critically important to ensure organizations can continue to safely use these advancements in technology while remaining compliant with existing and newly proposed regulations.

Data privacy is more than just a single day – it's every day, regardless of whether you are human or AI-powered.
