By 2030, more than 40% of global organizations will suffer security and compliance incidents due to the use of unauthorized AI tools, Gartner has predicted.
The analyst firm said a survey of cybersecurity leaders earlier this year revealed that 69% have evidence, or suspect, that employees are using public generative AI (GenAI) tools at work.
Gartner warned that such tools can increase the risk of IP loss, data exposure and other security and compliance issues. These risks should be well understood by now. As far back as 2023, Samsung was forced to ban the use of GenAI internally after staff shared source code and meeting notes with ChatGPT.
“To address these risks, CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said distinguished VP analyst Arun Chandrasekaran.
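One input to the kind of shadow AI audit Chandrasekaran describes is scanning web proxy or DNS logs for traffic to public GenAI services. A minimal sketch, assuming a simple space-separated log format and an illustrative (not exhaustive) domain watchlist:

```python
# Hypothetical sketch: flag proxy-log entries that hit known public GenAI
# endpoints, as one input to a shadow AI audit. The domain watchlist and
# log format here are illustrative assumptions, not a complete inventory.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def flag_genai_hits(log_lines):
    """Return (user, domain) pairs for lines whose domain is on the watchlist.

    Assumes each line has the space-separated form: '<user> <domain> <path>'.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in GENAI_DOMAINS:
            hits.append((parts[0], parts[1]))
    return hits

sample = [
    "alice chat.openai.com /v1/chat",
    "bob intranet.example.com /wiki",
    "carol claude.ai /chat",
]
print(flag_genai_hits(sample))  # → [('alice', 'chat.openai.com'), ('carol', 'claude.ai')]
```

In practice such a scan would feed the broader audit process, alongside SaaS discovery tooling and expense-report reviews, rather than serve as the sole detection mechanism.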
Gartner’s findings chime with several similar studies.
Last year, Strategy Insights reported that over a third of organizations in the US, UK, Germany, the Nordics and Benelux have faced challenges monitoring for unauthorized AI use. The same year, RiverSafe claimed that a fifth of UK firms have had potentially sensitive corporate data exposed via employee use of GenAI.
Separately, 1Password revealed last month that 27% of employees have worked with non-sanctioned AI tools.
Read more on shadow AI: Over a Third of Firms Struggling with Shadow AI
Technical Debt Mounts
Even legitimate use of GenAI could have unintended consequences, Gartner warned.
The analyst predicted that by 2030, 50% of enterprises will face delayed AI upgrades and/or rising maintenance costs due to unmanaged technical debt associated with GenAI usage. Delayed upgrades in particular can create security risks if not properly managed.
“Enterprises are excited about GenAI’s speed of delivery. However, the punitively high cost of maintaining, fixing or replacing AI-generated artifacts, such as code, content and design, can erode GenAI’s promised return on investments,” said Chandrasekaran.
“By establishing clear standards for reviewing and documenting AI-generated assets and tracking technical debt metrics in IT dashboards, enterprises can take proactive steps to prevent costly disruptions.”
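A simple dashboard metric of the sort Chandrasekaran alludes to might count AI-generated assets still awaiting human review. The sketch below is a hypothetical illustration; the field names and inventory structure are assumptions, not any specific product's schema:

```python
# Hypothetical sketch: track how many AI-generated artifacts have not yet
# passed human review, so the backlog can surface on an IT dashboard.
# Field names and the sample inventory are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    ai_generated: bool
    reviewed: bool

def review_backlog(artifacts):
    """Count AI-generated artifacts still awaiting human review."""
    return sum(1 for a in artifacts if a.ai_generated and not a.reviewed)

inventory = [
    Artifact("billing-service patch", ai_generated=True, reviewed=True),
    Artifact("customer FAQ rewrite", ai_generated=True, reviewed=False),
    Artifact("schema migration", ai_generated=False, reviewed=True),
]
print(review_backlog(inventory))  # → 1
```

Trending a figure like this over time would show whether review processes are keeping pace with AI-assisted output, which is the proactive tracking the quote recommends.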
The analyst also warned about ecosystem lock-in and the erosion of skills that could result from over-eager use of GenAI.
“To prevent the gradual loss of enterprise memory and capability, organizations should identify where human judgment and craftsmanship are essential, designing AI solutions to complement, not replace, these skills,” Chandrasekaran said.
He added that CIOs should prioritize open standards, open APIs and modular architectures when designing their AI stack, in order to avoid over-dependence on a single vendor.
