OpenAI is stepping up its push to bolster the security framework surrounding its enterprise-focused AI ecosystem.
Recently, the AI giant has looked to address the need for agentic AI security testing through its acquisition of agentic security testing firm Promptfoo.
In a soon-to-be released interview with Infosecurity, OpenClaw’s security advisor flagged that such a security requirement existed within agentic AI development.
Jamieson O’Reilly, an Australian hacker, founder of pentesting company DVULN and security advisor at OpenClaw, a local AI agent project that went viral at the beginning of 2026, spoke to Infosecurity for an upcoming podcast episode.
Asked about the future for agentic AI security, O’Reilly warned that the AI and cybersecurity community needs to develop more ways to “scan AI tools” for detecting “human-language malware, rather than using traditional file-based malware analysis.”
A day after the interview, conducted on March 9, OpenAI announced it was acquiring Promptfoo in a bid to reinforce security measures for AI agents in enterprise applications.
Founded in July 2024 by Ian Webster, a senior engineering manager at Discord, and Michael D'Angelo, the VP of Engineering and head of machine learning at Smile Identity, Promptfoo addresses the security gap O’Reilly highlighted.
Specifically, the startup provides open source tools to test and evaluate large language models (LLMs) and AI agents. These include tools for scanning vulnerabilities in LLMs, red-teaming AI tools, evaluating AI prompts and models, and providing a secure proxy for model context protocol (MCP) servers, one of the building blocks of AI agents.
According to OpenAI’s March 10 announcement, Promptfoo’s suite of tools are used by over 25% of Fortune 500 companies.
The startup has raised $23m so far, including $18.4m from VC firm Insight Partners in July 2025 with participation from Andreessen Horowitz. According to its LinkedIn page, Promptfoo employs over 20 people.
No financial details about the acquisition were shared by either party.
OpenAI Acquires Promptfoo to Enhance AI Agent Security Testing
OpenAI said companies are increasingly deploying AI agents, which it calls “AI coworkers,” and Promptfoo can help offer “systematic ways to test AI agent behavior, detect risks before deployment and maintain clear records to support oversight, governance and accountability over time.”
Once the acquisition is approved, OpenAI will integrate Promptfoo’s technology directly into OpenAI Frontier, its platform for building and operating “AI coworkers.”
The company stated that security and safety testing would become built-in capabilities of the Frontier platform, with automated security testing and red‑teaming tools designed to help enterprises identify and remediate risks such as prompt injections, jailbreaks, data leaks, tool misuse and out‑of‑policy agent behaviors.
OpenAI also said that security and evaluation would be integrated into development workflows so organizations can identify, investigate and remediate agent risks earlier in the development process.
In addition, integrated reporting and traceability features will provide oversight and accountability, enabling organizations to document testing, monitor changes over time and meet emerging governance, risk and compliance expectations for AI.
Finally, the generative AI giant confirmed it will keep Promptfoo’s current product suite open source and available for anyone to use and deploy.
OpenAI’s Security Future Involves OpenClaw and Promptfoo
Speaking to Infosecurity about the acquisition, O’Reilly said it “made a lot of sense.” However, he added that he didn’t have enough context about Promptfoo and the acquisition to further comment.
Since being appointed OpenClaw’s security advisor, O’Reilly has worked on a security roadmap for the project. He also signed, on February 7, an agreement with Google-owned VirusTotal, to improve the security of OpenClaw-compatible skills shared on skills libraries such as ClawHub.
“While VirusTotal is known for more traditional binary-based malware analysis, they were the only ones besides ourselves who were seriously studying the abuse of skills marketplaces,” O’Reilly told Infosecurity.
He also highlighted the benefit of VirusTotal’s privileged access to Google AI Gemini to “scan human-language malware.”
A few days after the OpenClaw agreement with VirusTotal, Peter Steinberger, the founder of OpenClaw, announced on February 14 that he joined OpenAI.
While it remains unclear whether the Austrian software developer is taking the OpenClaw project with him to OpenAI, he confirmed to several media outlets that OpenClaw will move to a foundation and stay open and independent.
Speaking on the Lex Fridman podcast on February 12, Steinberger said he would like OpenClaw to follow a model similar to Google’s Chromium and Chrome, where an open‑source project (Chromium) is maintained by a company alongside outside contributors and serves as the foundation for commercial products such as Google Chrome, Microsoft Edge, Brave, Opera and Vivaldi.
Whatever happens, with Steinberger’s hiring and now the Promptfoo integration, as well as the recent rollout of Codex Security, a tool formerly known as Aardvark and designed to help developers identify and mitigate vulnerabilities in AI‑generated code, OpenAI seems to be moving more aggressively to build out the security infrastructure around its enterprise AI ecosystem.
Join us on Tuesday April 28 for the AI Security and Governance Virtual Summit
