The Risk of Accidental Data Exposure by Generative AI is Growing

Businesses are exploring how to balance the benefits of AI with the associated risks. Against this backdrop, Netskope Threat Labs has recently released the latest edition of its Cloud and Threat Report, focused on ‘AI Apps in the Enterprise.’

The report examines the risks posed by AI apps, including the increased attack surface for the enterprise (something I have already described in a previous blog post) and the accidental sharing of sensitive information.

Given all the hype and media interest, it is unsurprising that the report found the number of users accessing AI apps in the enterprise is growing exponentially, and with it the risk of accidental exposure of internal information. According to the study, during May and June 2023 the percentage of enterprise users accessing at least one AI app each day increased by 2.4% per week, a total increase of 22.5% over the period.

ChatGPT is the most popular enterprise AI app, with more than eight times as many daily active users as any other AI app. Organizations with more than 1,000 users used, on average, three different AI apps per day, while organizations with more than 10,000 users used, on average, five, and one out of every 100 enterprise users interacts with an AI app each day.

This rapid growth is largely driven by the potential of AI apps to deliver productivity gains and competitive advantage to the enterprise. Applications like ChatGPT can be used for multiple purposes, such as reviewing source code for security flaws, assisting in editing written content, and supporting better data-driven decisions.

But in embracing the generative AI app era, organizations and IT leaders are facing an age-old dilemma: what are the acceptable costs or trade-offs in terms of security in exchange for the benefits that generative AI promises?

Source Code is the Most Frequently Exposed Type of Sensitive Data

When using AI apps, the risk of accidentally sharing sensitive information or intellectual property is a significant issue. The report found that an organization can expect around 660 daily prompts to ChatGPT for every 10,000 users, with source code being the most frequently exposed type of sensitive data: it is posted by 22 out of every 10,000 enterprise users and generates, on average, 158 incidents per month. This is ahead of regulated data (on average, 18 incidents), intellectual property (on average, four incidents), and posts containing passwords and keys (on average, four incidents) each month.


Therefore, it is no surprise that Samsung decided to ban its employees’ use of generative AI apps (and develop its own AI application) in May 2023 after some users accidentally leaked sensitive data via ChatGPT. 

Even when users are not the source of the leak, the platform itself is not immune from security flaws. For example, at the end of March 2023, OpenAI, the company behind ChatGPT, provided details of a data breach caused by a bug in an open-source library, forcing it to temporarily take the generative AI app offline. The data breach exposed some customers’ payment-related information and allowed titles from some active users’ chat history to be viewed.

However, since very few companies have the resources to follow Samsung and develop their own AI tools in-house, it is necessary to find the right trade-off between the risks and the opportunities for the enterprise.

Putting Controls in Place to Safely Enable Generative AI Applications

Organizations should put specific controls in place around ChatGPT and other generative AI applications, and the level of intervention varies significantly by industry vertical. In general, highly regulated industries, such as financial services and healthcare, prefer a more conservative approach: nearly one in five organizations (18%) enforce a complete block (i.e. no users are allowed to use generative AI apps).

This conservative approach is adopted by only around one in 20 organizations (4.8%) in more entrepreneurial verticals, such as technology, which instead prefer a more granular approach based on DLP controls that detect whether specific types of sensitive information, such as source code or PII, are posted to ChatGPT and similar apps. These sectors also put the role of users and their accountability at the centre of the security process, implementing real-time user coaching in 20% of cases to remind users of company policy and the risks related to AI apps (a safeguard that was evidently not in place in Samsung’s case).
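To make this granular approach concrete, here is a minimal Python sketch of how an in-house policy layer might map the type of sensitive data detected in a prompt to an action of allow, coach, or block. The category names, the POLICY table and the evaluate_prompt function are illustrative assumptions, not part of the Netskope report or of any vendor’s product.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"   # let the prompt through unchanged
    COACH = "coach"   # show a policy reminder and ask the user to confirm
    BLOCK = "block"   # stop the prompt from reaching the AI app

# Illustrative mapping of detected data types to policy actions.
# A highly regulated organization might set every entry to BLOCK instead.
POLICY = {
    "source_code": Action.COACH,
    "regulated_data": Action.BLOCK,
    "intellectual_property": Action.BLOCK,
    "passwords_and_keys": Action.BLOCK,
    "none": Action.ALLOW,
}

def evaluate_prompt(detected_type: str) -> Action:
    """Return the policy action for a prompt, given the data type a DLP scan detected."""
    # Unknown categories default to coaching rather than silently allowing them.
    return POLICY.get(detected_type, Action.COACH)

if __name__ == "__main__":
    print(evaluate_prompt("source_code"))  # Action.COACH -> remind the user of policy
    print(evaluate_prompt("none"))         # Action.ALLOW
```

In practice, the detection step that produces detected_type is the harder part; a similarly simplified sketch of it appears after the recommendations list below.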

An increasing number of industries will likely adopt this approach, built on granular DLP policies and real-time user coaching, as they look to unleash the full potential of AI apps while mitigating the related risks.

How to Adopt AI Apps Securely

With every wave of technology transformation, security and IT leaders struggle to balance security with the urgency to innovate. However, adopting AI apps in the enterprise rests on core concepts IT teams should already be familiar with: identifying permissible apps and implementing controls that empower users to use them to their fullest potential while safeguarding the organization from risk. Some general good-practice recommendations include:

  • Regularly review AI app activity, trends, behaviors and data sensitivity to identify risks to the organization.
  • Block access to apps that do not serve legitimate business purposes or pose a disproportionate risk.
  • Use DLP policies to detect posts containing potentially sensitive information, including source code, regulated data, passwords and keys, and intellectual property (a minimal illustrative sketch of such a check follows this list).
  • Employ real-time user coaching (combined with DLP) to remind users of company policy surrounding the use of AI apps during interaction.
  • Ensure that all security defences share intelligence and work together to streamline security operations.
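
Following on from the DLP recommendation above, here is a minimal Python sketch of the kind of detection it implies: scanning an outbound prompt for a few obvious indicators of passwords, keys and source code. The regular expressions and the scan_prompt helper are simplified assumptions for illustration only; a production DLP engine relies on far richer detection (exact-data matching, document fingerprinting, machine-learning classifiers) than a handful of patterns.

```python
import re

# Deliberately simplified indicators of sensitive content in an outbound prompt.
# Real DLP engines combine many more detectors and far more robust patterns.
PATTERNS = {
    "passwords_and_keys": re.compile(
        r"(password\s*[:=]\s*\S+|api[_-]?key\s*[:=]\s*\S+|-----BEGIN [A-Z ]*PRIVATE KEY-----)",
        re.IGNORECASE,
    ),
    "source_code": re.compile(
        r"(def |class |#include\s*<|import\s+\w+|public\s+static\s+void)"
    ),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the sensitive-data categories matched in the prompt text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    sample = "Can you review this?\napi_key = 'sk-1234'\ndef connect():\n    ..."
    print(scan_prompt(sample))  # ['passwords_and_keys', 'source_code']
```

A verdict from a check like this can then feed the policy layer sketched earlier, so that the same detection drives blocking, coaching or allowing depending on the organization’s risk appetite.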

Armed with these basic principles of cloud security, companies should be able to test and experiment with these AI apps with the confidence that their users are not unwittingly exposing proprietary corporate IP. 

This approach also allows companies to go beyond mainstream applications of AI, such as text and image generation, and test more nuanced use cases that could unlock significant business efficiencies.
