Cost of Insider Incidents Surges 20% to Nearly $20m

Employee negligence driven by shadow AI cost organizations more than any other type of insider risk last year, accounting for 53% of the $19.5m lost on average per business, according to DTEX.

The security vendor’s Cost of Insider Risks 2026 report was produced by the Ponemon Institute and based on interviews with 8750 IT and security practitioners at 354 global organizations.

Malicious incidents such as sabotage, data theft, fraud and unauthorized disclosure accounted for 27% ($4.7m) of the total lost to insider risks last year, DTEX claimed.

That pales in comparison to negligence (e.g. ignoring IT warnings) and mistakes (e.g. accidentally “pressing the wrong button”), which amounted to an average of $10.3m in losses per company.

A third category of “outsmarted” employees refers to those who may have been phished. This accounted for the smallest share of losses: 20%, or $4.5m.

In total, the report catalogued 7490 incidents and recorded a 20% increase in insider-related losses since 2023.


Costs related to employee negligence have risen 17% year-on-year, the report found. The main causes were the use of personal webmail, file sharing sites and shadow AI.

Although 73% of respondents are worried that undocumented AI use is creating invisible data loss pathways, just 13% have formally incorporated AI into their business strategy. Only 18% have fully integrated AI governance policies into their insider risk management program.

The Shadow AI Threat

The report pointed to several risks associated with shadow AI:

  • The inputting of internal documents into public models like ChatGPT
  • AI notetakers producing publicly accessible recordings and summaries containing sensitive internal discussions and PII
  • AI browsers that enable access to malicious sites, AI-assisted torrenting, and NSFW content generation
  • AI browsers and agents accessing corporate systems, performing tasks, and bypassing traditional controls and logging

Blocking AI tools merely encourages staff to use other ones, the report warned.

AI agents are seen as particularly problematic. Over two-fifths (44%) of respondents said that malicious use of agents will “significantly” or “moderately” increase data theft risks, but only 19% classify AI agents as equivalent to human insiders.

Improvements Being Made

However, agents can also be part of the solution. A fifth (19%) of respondents said they’ve deployed AI agents in daily workflows, and 71% rate them important or extremely important for early insider risk detection.

Behavioral analysis was cited as important or essential by 71% of responding organizations.

These capabilities may be part of the reason why the average time taken to contain an insider incident fell to 67 days, down from 86.

DTEX urged CISOs to “double down on what works”:

  • Behavioral intelligence to highlight “early, non-obvious risk signals” before incidents can escalate
  • Identity-centric security for humans, service accounts and AI agents
  • Defensive AI that improves precision, reduces false positives, and enables risk-aware prevention at scale
  • Governance and data classification to close AI-driven exposure gaps
  • A mindset shift from “human-only risk” to “human-plus-machine risk,” treating AI as an “operational insider”
