The rising use of generative AI tools built on large language models (LLMs) in the workplace is increasing the risk of cyber-security violations, as organizations struggle to keep tabs on how employees are using them.
One of the key challenges IT and security teams face is the continued use of shadow AI, where employees use personal accounts for tools such as ChatGPT, Google Gemini, and Microsoft Copilot at work.
According to Netskope’s Cloud and Threat Report for 2026, nearly half (47%) of people using generative AI tools in the workplace are using personal accounts and applications to do so.
This leaves IT and security teams with little visibility into, or control over, how these personal generative AI accounts are being used at work.
The result is a rise in cyber-security risks and data-policy violations, and with them the danger of sensitive corporate information being leaked.
Meanwhile, the number of prompts being sent to generative AI applications is on the rise.
"While the number of users tripled on average, the amount of data being sent to SaaS gen AI apps grew sixfold, from 3,000 to 18,000 prompts per month. Meanwhile, the top 25% of organizations are sending more than 70,000 prompts per month, and the top 1% are sending more than 1.4 million prompts per month,” said Netskope in its report.
Generative AI Data Policy Violations Average 223 Per Month
This increase in data being sent to AI tools creates additional security risks for organizations: according to Netskope, the number of known data policy violations resulting from employees' use of generative AI and LLMs has doubled in the last year. Given how much organizations struggle to monitor shadow AI, that figure is likely an underestimate.
"In the average organization, both the number of users committing data policy violations and the number of data policy incidents has increased twofold over the past year, with an average of 3% of gen AI users committing an average of 223 gen AI data policy violations per month,” said the report.
It also warns that the more enthusiastically organizations and their employees adopt AI applications and services, the higher the risk of data policy violations: the top 25% of organizations by generative AI use saw an average of 2,100 incidents a month.
These incidents involve sensitive data being sent to AI tools, including source code, confidential business data, intellectual property and even login credentials, increasing the risk of accidental data exposure and compliance failures.
This is especially the case when employees use their personal accounts, which, without proper procedures in place, security teams might not even be aware are in use.
There is also the risk that attackers take advantage of information entered into LLMs, using carefully crafted prompts to draw out sensitive data they can exploit in its own right, or use to make targeted campaigns more customized and efficient.
As the use of generative AI and LLMs continues to grow, organizations need to ensure they have effective policies in place to maximize visibility of AI tool usage across the network, as well as to educate employees on what constitutes risky use of AI.
“The combination of the surge in data policy violations and the high sensitivity of the data regularly being compromised should be a primary concern for organizations that haven’t taken initiatives to bring AI risk under control,” said Netskope.
“Without stronger controls, the probability of accidental leakage, compliance failures, and downstream compromise continues to rise month over month.”
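One low-cost way to start building that visibility is to look for traffic to well-known gen AI domains in the web-proxy logs most organizations already collect. The short Python sketch below illustrates the idea; the log format (a CSV with "user" and "host" columns), the file name, and the domain list are illustrative assumptions for the example, not a description of Netskope's tooling or any specific product.

# Illustrative sketch only: flag outbound requests to well-known gen AI
# domains in a web-proxy log. The CSV columns, file name, and domain list
# are assumptions made for this example.
import csv
from collections import Counter

GEN_AI_DOMAINS = {
    "chat.openai.com",       # ChatGPT
    "gemini.google.com",     # Google Gemini
    "copilot.microsoft.com", # Microsoft Copilot
}

def flag_gen_ai_usage(log_path: str) -> Counter:
    """Count requests per user to known gen AI domains in a proxy log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GEN_AI_DOMAINS):
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    # Hypothetical log file name; replace with your own proxy export.
    for user, count in flag_gen_ai_usage("proxy_log.csv").most_common():
        print(f"{user}: {count} gen AI requests")

In practice, most organizations would lean on the built-in application categories of their secure web gateway or DLP tooling rather than a hand-maintained domain list, but the underlying principle of flagging and reviewing shadow AI traffic is the same.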
While data policy violations via generative AI remain a significant risk, it appears organizations are starting to take notice: the percentage of employees using personal AI accounts in the workplace has fallen from 78% to 47% over the past twelve months, suggesting data governance policies are starting to clamp down on the use of shadow AI.
