As enterprise AI adoption surges, from autonomous email processing to AI-driven workflow automation, security leaders face a new reality: AI agents are now insiders. These agents have access to sensitive data, third-party systems and decision-making authority.
Yet most organizations still treat them as unmanaged assets rather than high-risk identities subject to the same security controls as human workers.
That gap is a growing concern for AI governance experts like Meghan Maneval, director of community and education at Safe Security and a key contributor to ISACA’s Advanced AI Security Management (AAISM) certification.
Speaking to Infosecurity at the ISACA Europe 2025 conference in London, Maneval argued that mandatory security awareness training must extend to AI agents – just as it does for employees.
“It may not be as candid as what humans would do during those sessions, but AI agents used by your workforce do need to be trained. They need to understand what your company policies are, including what is acceptable behavior, what data they're allowed to access, what actions they're allowed to take,” she explained.
During her talk at ISACA Europe 2025, Maneval extended her insights beyond AI agents to all enterprise AI tools, outlining a best-practice framework for AI auditing.
Infosecurity selected some of her key recommendations, drawn from both an exclusive interview at ISACA Europe 2025 and her subsequent presentation on AI governance and auditing best practices.

The Five Commandments of AI Auditing
Write Everything Down
When people reach out to Maneval for advice on launching an AI auditing program, she generally starts with the same recommendation: write everything down.
“I kind of joke, but I do tell people to start by writing everything down, build an inventory, a list of the AI tools used and how they are used,” she told Infosecurity.
This can range from a simple critical-application inventory jotted down on a piece of paper to a machine-readable software bill of materials (SBOM) or AI bill of materials (AIBOM), depending on the organization’s maturity.
She said this inventory could be built using responses from an employee survey asking every employee how they are using AI.
“Inevitably, people are using it and potentially downloading software off the internet without any company oversight,” she said.
These inventories should ideally go beyond the organization’s workforce and include third-party systems, including how partners (suppliers, software providers, clients, etc.) are using AI.
“If you already have a third-party listing with all the companies you deal with and all the applications they use, you should ask them how they use AI in both internal use cases and customer-facing ones,” she said.
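For more mature organizations, that inventory can be captured in a machine-readable form. The snippet below is a minimal illustrative sketch in Python of what an AIBOM-style entry might look like; the schema, field names and example tool are hypothetical assumptions for the example, not a formal AIBOM standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIBOMEntry:
    """One hypothetical entry in an AI bill of materials (AIBOM)."""
    tool_name: str
    owner_team: str
    use_cases: list            # how the workforce actually uses the tool
    underlying_models: list    # e.g. vendor model names, if known
    data_accessed: list        # data classifications the tool can reach
    third_party: bool          # supplied or hosted by an external partner
    approved: bool             # has it gone through company oversight?

inventory = [
    AIBOMEntry(
        tool_name="example-email-summarizer",   # hypothetical tool
        owner_team="Sales Operations",
        use_cases=["summarize inbound customer email"],
        underlying_models=["vendor-hosted LLM (unspecified)"],
        data_accessed=["customer PII", "contract terms"],
        third_party=True,
        approved=False,  # e.g. downloaded without company oversight
    ),
]

# Export to JSON so the inventory is machine-readable and auditable.
print(json.dumps([asdict(entry) for entry in inventory], indent=2))
```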
Don’t Start with Checkboxes
While listing applications, systems and use cases is a critical first step, Maneval also warned against limiting AI audit programs to a checkbox exercise.
“I think we all started the audit journey as a checkbox activity and now, with experience, are all saying it shouldn’t be. Don't start with checkboxes. You have to be intentional,” she said during her talk at ISACA Europe 2025.
Being intentional, she explained, means audit managers should understand not only what AI is used for, but how it is used.
This means actively examining the AI’s inner workings:
- Understanding which machine learning algorithms and AI models underlie the user-friendly AI interfaces
- Validating the data these models have been trained on
- Identifying the potential biases and weaknesses in these datasets
- Understanding how each AI tool is making decisions, from answering questions (AI chatbots, AI assistants) to acting on behalf of the user (AI agents)
“Most AI tools are just trained to do the same thing over and over and so it means decisions are based on assumptions from limited information,” she explained to Infosecurity.
“Additionally, most AI tools solve real problems but also create real risks, and each solves different problems and creates different risks.”
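To make the training-data checks above concrete, the sketch below flags under-represented categories in a sample dataset – a crude first pass at spotting the dataset biases and weaknesses Maneval describes. The column names, data and threshold are invented for illustration, not drawn from her talk.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, column: str, threshold: float = 0.15):
    """Flag categories in `column` that make up less than `threshold`
    of the dataset - a crude signal of under-representation."""
    shares = df[column].value_counts(normalize=True)
    flagged = shares[shares < threshold]
    return shares, flagged

# Hypothetical training dataset for a loan-approval model.
df = pd.DataFrame({
    "region": ["EU", "EU", "EU", "EU", "US", "US", "APAC", "EU", "EU", "US"],
    "label":  [1, 1, 0, 1, 0, 1, 0, 1, 1, 1],
})

for col in ["region", "label"]:
    shares, flagged = representation_report(df, col)
    print(f"\nDistribution of '{col}':\n{shares.to_string()}")
    if not flagged.empty:
        print(f"Potentially under-represented values: {list(flagged.index)}")
```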
While some cybersecurity experts argue that auditing AI tools is no different to auditing any other software or application, Maneval disagrees.
“With AI, you're not starting from scratch, but it's also not a carbon copy. There's this grey space in the middle where you want to look at what you're already doing – you probably already have some policies on things like data protection, third-party risk management, preferred encryption methods. You can apply some of those to AI,” she said.
“But then there will be some extra verifications you will have to do. Those are the gaps that a lot of people don't know yet. Identifying those gaps is where people are going to have to focus.”
Maneval also recommended reviewing existing policies on the organization’s risk tolerance to gauge which risks posed by AI tools and AI agents should be mitigated first.
“Some companies may take on more risk. My company, for instance, strongly encourages us to use AI. Other companies might be more conservative or want more policies. The bottom line is that you have to start with what you are already doing and what you are willing to accept, and that turns into your policy statement, from which you can then start to build controls,” she said.
Read more: AI-Driven Social Engineering Top Cyber Threat for 2026, ISACA Survey Reveals
Treat Any AI Application Like a Human
Speaking to Infosecurity, Maneval said her “rule of thumb” is that whether you’re dealing with traditional machine learning algorithms, generative AI applications or AI agents, you should “treat them like any other employee.”
This means not only that AI-powered agents should be trained on security policies, but also that they should be subject to the same security controls as staff, such as role-based access control (RBAC).
“You should look at how you treat your humans and apply those same controls to the AI. You probably do a background check before anyone is hired. Do the same thing with your AI agent. You know that humans can't go and do whatever they want across your network and that their navigation within your system is bound by zero trust controls. Well, neither should that AI agent. Maybe that means this AI agent can't do anything without secondary approval, for instance,” she told Infosecurity.
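That “background check and least privilege” principle translates naturally into code. The sketch below is a minimal, hypothetical illustration of role-based permissions plus a secondary-approval gate for an AI agent’s actions; the role names, actions and approval rule are assumptions for the example, not any specific product’s API.

```python
# Hypothetical role-based access control (RBAC) plus a secondary-approval
# gate for an AI agent, mirroring the controls applied to human staff.

ROLE_PERMISSIONS = {
    "support-agent": {"read_ticket", "draft_reply"},
    "finance-agent": {"read_invoice", "draft_payment"},
}

# Actions considered high-risk always require a human sign-off.
REQUIRES_SECONDARY_APPROVAL = {"draft_payment", "delete_record"}

def authorize(agent_role: str, action: str, human_approved: bool = False) -> bool:
    """Return True only if the agent's role allows the action and,
    for high-risk actions, a human has explicitly approved it."""
    allowed = action in ROLE_PERMISSIONS.get(agent_role, set())
    if not allowed:
        return False
    if action in REQUIRES_SECONDARY_APPROVAL and not human_approved:
        return False
    return True

# The support agent can draft a reply on its own...
assert authorize("support-agent", "draft_reply") is True
# ...but cannot draft a payment, even with approval, outside its role...
assert authorize("support-agent", "draft_payment", human_approved=True) is False
# ...and the finance agent needs a human in the loop for payments.
assert authorize("finance-agent", "draft_payment") is False
assert authorize("finance-agent", "draft_payment", human_approved=True) is True
```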
Combine Monitoring Techniques
Maneval advocated for combining several AI monitoring techniques, especially:
- System log analysis
- Behavioral analysis
- AI drift detection
- Anomaly detection
AI drift is the gradual decline in an AI model's performance over time due to changes in real-world data, user behavior or environmental factors, causing its predictions or decisions to become less accurate or relevant.
“This is just like a store tracking four cartons of milk but never checking if they’re spoiled,” Maneval said during her talk.
“AI systems often monitor outputs (e.g. stock levels) without assessing real-world usage or quality. Without proper thresholds, alerts and usage logs, you’re left with data that exists but isn’t actually useful – just like milk no one wants to drink. This is why combining different monitoring techniques is critical.”
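As one illustration of how those techniques can be combined, the sketch below compares recent model confidence scores against a baseline window using a two-sample Kolmogorov-Smirnov test (a common drift signal) and flags simple statistical anomalies. The data, metric and thresholds are hypothetical; a production program would draw on real system logs and behavioral data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Hypothetical model confidence scores pulled from system logs.
baseline_scores = rng.normal(loc=0.85, scale=0.05, size=1000)   # at deployment
recent_scores   = rng.normal(loc=0.70, scale=0.10, size=1000)   # last 7 days

# Drift detection: has the distribution of scores shifted since baseline?
stat, p_value = ks_2samp(baseline_scores, recent_scores)
if p_value < 0.01:
    print(f"Possible drift: KS statistic={stat:.3f}, p={p_value:.2e}")

# Anomaly detection: flag individual outputs far outside the baseline range.
mean, std = baseline_scores.mean(), baseline_scores.std()
anomalies = recent_scores[np.abs(recent_scores - mean) > 3 * std]
print(f"{len(anomalies)} outputs more than 3 standard deviations from baseline")
```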
Read now: How Security Teams Can Manage Agentic AI Risks
Build Controls – And Audit Them Too
During her session at ISACA Europe 2025, Maneval emphasized that an ideal AI audit program should not only audit the AI and its underlying technology and training data, but also the outputs provided by the AI tool and the controls built on top of the tool.
She said AI algorithm audits should evaluate “the model’s fairness, accuracy and transparency,” while AI output audits should “look out for red flags, such as incorrect information, inappropriate suggestions or sensitive data leaks.”
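A simple way to operationalize output audits is to scan a sample of the tool’s responses for obvious red flags such as leaked personal data. The patterns and sample text below are hypothetical and deliberately crude; a real program would pair such automated checks with human review.

```python
import re

# Crude, illustrative patterns for sensitive data that should never
# appear in an AI tool's output.
RED_FLAG_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "credit card-like number": r"\b(?:\d[ -]?){13,16}\b",
    "AWS-style access key": r"\bAKIA[0-9A-Z]{16}\b",
}

def audit_output(text: str) -> list[str]:
    """Return the names of red-flag patterns found in a model output."""
    return [name for name, pattern in RED_FLAG_PATTERNS.items()
            if re.search(pattern, text)]

# Hypothetical sampled output from an AI assistant.
sample = "Sure, you can reach the customer at jane.doe@example.com."
findings = audit_output(sample)
if findings:
    print(f"Red flags in output: {findings}")
```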
Finally, Maneval recommended that AI auditors evaluate the security guardrails, access controls and data leak protection built around AI tools or embedded into the fine-tuned models used by the organization.
“Auditing AI isn’t about calling someone out, it’s about learning how the system works so we can help do the right thing,” she concluded.
Conclusion: AI Tools, Insiders of a Different Nature
AI agents – and by extension, any AI-powered tools – are now insiders with access to sensitive data, third-party systems and decision-making authority, just like other staff members. Meghan Maneval’s AI audit approach therefore centers on treating AI like any other staff member rather than as an unmanaged asset.
Watch now: Mastering Identity and Access for Non-Human Cloud Entities
