AI Adoption Outpaces Safety Policies, Leaving Organizations Exposed to Cyber Risk

AI has become embedded in organizations, yet fewer than half have a formal AI safety or security policy in place, potentially leaving them exposed to data breaches, privacy failures and other cyber threats.

According to new research published by ISACA on May 5, 90% of digital trust professionals believe that employees in their organization use AI tools.

However, only 38% said their organization has a formal, comprehensive AI policy in place to manage the use of AI tools, while 30% said they have only a limited policy.

Despite the rise of AI in the workplace, 25% of respondents said their organization has no AI policies in place at all.

The lack of clear policies around appropriate AI usage has fueled the rise of Shadow AI, as employees use tools such as large language models (LLMs) in their day-to-day work. This could result in sensitive company information being shared with AI models.

Those polled as part of ISACA’s annual AI Pulse Poll said it was unclear whether they could prevent a security incident caused by a Shadow AI tool unknown to security and IT teams.

Uncertainties Over Ability to Shut Down AI

In total, 56% of respondents said they do not know how long it would take to halt an AI system due to a security incident.

Only 20% said their organization has a process in place to shut down or override AI systems if something goes wrong, such as the system carrying out malicious activity or being compromised by a data poisoning attack.

“With only 38% of practitioners confident in their board’s understanding of AI risks, the leadership deficit is as real as the technology one,” said Ulrika Dellrud, member of ISACA’s Emerging Trends Working Group and chief privacy and data ethics officer at Smarter Contracts.

“Effective AI governance also starts with mastering your data: without strong data and privacy governance as a foundation, organizations cannot manage AI risk, ensure trust, or unlock sustainable value. The path forward is clear: AI success will depend not just on innovation, but on disciplined governance, informed leadership and responsible data stewardship.”

The research also found that data privacy and security professionals believe AI-powered cybersecurity threats are escalating, with many saying these threats are going unnoticed by their organizations.

In the AI Pulse Poll, respondents highlighted several growing challenges linked to AI threats:

  • 71% said AI-powered phishing and social engineering attacks are now more difficult to spot
  • 58% said AI has made it significantly harder to authenticate digital information
  • 38% said their trust in traditional threat detection methods has declined as a result

Despite this, many respondents see AI as an advantage for cyber defenders, with 43% noting that the deployment of AI-based cybersecurity tools has improved their organization’s ability to detect and respond to cyber threats.

The ISACA AI Pulse Poll is based on responses from 3,400 global digital trust professionals working across IT audit, governance, cybersecurity, privacy and emerging technology roles.