Shadow AI: One In Four Employees Use Unapproved AI Tools, Research Finds

Shadow AI is emerging as one of the top forms of shadow IT, a new 1Password report has revealed.

The unauthorized use of AI tools was found to be the second-most prevalent form of shadow IT, ranking only behind email, according to 1Password’s 2025 Annual Report, published on October 30.

Overall, workers are broadly encouraged by their companies to use AI as part of their workloads: of the 5200 workers surveyed, 73% said their company is in favor of such experimentation.

However, 37% admitted they do not always follow their company’s AI policies when using AI tools. Worse, 27% of employees admitted to having used AI tools that had not been authorized by their company.

This number is still much lower than general shadow IT, the report said, with 52% of employees admitting they have downloaded apps without IT approval.

Shadow AI has been described by 1Password as a potentially more dangerous practice than general shadow IT, as these tools “can absorb sensitive information into their training data, violate legal and compliance mandates or function as outright malware.”


Generative AI Fuels Innovation Appetite

Speaking at a CISO roundtable during a launch event for the 1Password report, Mark Hazleton, CSO for Formula One racing team Oracle Red Bull Racing, explained that the rise of shadow AI was partly due to productivity gains being a top priority for most employees when adopting new tools.

He said that workers are “focused on getting the job done, so if we try and restrain them, they will find a way to do what they need to do.”

“In F1, if somebody comes up on a Saturday night with a mechanism that’s going to save a second in the race on Sunday, we want to enable them to go forward with it,” he said.

The 1Password report found that almost half of respondents justified their ‘shadowy’ use of AI tools by their convenience (45%), and almost as many said they feel more productive when using AI (43%).

Breakdown of reasons to use AI without IT approval recorded in the 1Password 2025 Annual Report. Source: 1Password

Hazleton also noted that the emergence of generative AI tools has ignited an unheard-of appetite for innovation within the workforce.

Susan Chiang, CISO at healthcare firm Headway, added, “Adoption of third-party software recently expanded a lot, but this expansion did not necessarily come with increased awareness of what the potential impact and risks are.”

Shadow AI vs. Shadow IT: How Freemium AI Tools Expand Risk

Shadow AI stands out from general shadow IT because of the diverse range of tasks employees use AI for.

The 1Password report showed that these range from transcribing and summarizing customer call notes (22% of respondents said they use AI for this) to performance reviews and hiring processes (16% of employees use AI in such a way).

AI tools are also leveraged for various data analytics use cases, with 16% of respondents using AI to analyze company data and 21% to analyze customer data.

Breakdown of the AI use cases recorded in the 1Password 2025 Annual Report. Source: 1Password

Chiang explained the rise of shadow AI is connected to the model adopted by general-purpose generative AI tools early on.

“Generative AI made the freemium model popular again – and you can already do a lot with a free large language model (LLM) tool, for instance,” she started. “However, while a lot of employees understand the concept of contracts and risks, they don’t necessarily think risk management policies apply to free products.”

While the web-based, freemium app approach is prominent with generative AI tools, Brian Morris, VP and CISO at Gray Media, said the same conclusion applies to many shadow IT practices outside of AI tools.

“The real number of employees using shadow IT is probably much higher than 52% because we’re not just talking about downloading apps – people use web apps like Grammarly and Monday all the time that expose company data. But because they work through the browser, they don’t really think of them as apps,” Morris explained.

AI Governance Best Practices

To overcome some of these AI blind spots, the 1Password report recommended a three-step approach for AI governance:

  1. Maintain a complete inventory of AI tools in use at your organization and conduct regular audits
  2. Establish clear policies, enforce appropriate AI usage and guide users toward safe tools and behaviors
  3. Invest in controls to ensure only company-sanctioned AI tools can access company data
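The first and third steps can be approximated even with simple tooling. As an illustrative sketch only (the domain lists, log format and `flag_unsanctioned` helper are hypothetical, not from the 1Password report), a script could compare outbound request domains from a proxy log against an allowlist of company-sanctioned AI services:

```python
# Hypothetical sketch: flag traffic to known AI services that are not on a
# company-sanctioned allowlist. Domain lists and log format are illustrative.

SANCTIONED_AI_DOMAINS = {"api.openai.com"}  # example approved tool

KNOWN_AI_DOMAINS = {  # AI services to watch for (illustrative)
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_unsanctioned(proxy_log_lines):
    """Return AI-service domains seen in the log that are not sanctioned."""
    flagged = set()
    for line in proxy_log_lines:
        # Assume each log line is "timestamp user domain" (hypothetical format)
        parts = line.split()
        if len(parts) < 3:
            continue
        domain = parts[2]
        if domain in KNOWN_AI_DOMAINS and domain not in SANCTIONED_AI_DOMAINS:
            flagged.add(domain)
    return sorted(flagged)
```

In practice this role is filled by secure web gateways, CASB or browser-management products rather than scripts, but the underlying control is the same: maintain an inventory of what is sanctioned and surface everything that is not.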

During the CISO roundtable, Headway’s Chiang also recommended that security teams taking a risk-based approach not only focus on the highest identified risks, but also spend time addressing low-to-medium risks that can be quickly resolved, so that they do not become overwhelmed by many issues at the same time.

“When it comes to AI, we talk a lot about ‘death by 1000 cuts,’ with many low to medium risks that are worth investing in and could be easily resolved by implementing education and awareness processes,” she explained.

The 1Password 2025 Annual Report is based on an online survey distributed by PureSpectrum among 5200 knowledge workers in Canada, France, Germany, Singapore, the UK and the US.
