Security researchers have warned of a “critical, systemic” vulnerability in the model context protocol (MCP) which could have a significant impact on the AI supply chain.
MCP is a popular open source standard created by Anthropic that allows AI models to connect to external data sources and systems.
However, in a report published on April 15, researchers at Ox Security claimed that a flaw in the protocol could enable arbitrary command execution on any vulnerable system, handing attackers access to sensitive user data, internal databases, API keys, and chat histories.
“This is not a traditional coding error,” warned the vendor.
“It is an architectural design decision baked into Anthropic’s official MCP SDKs across every supported programming language, including Python, TypeScript, Java, and Rust. Any developer building on the Anthropic MCP foundation unknowingly inherits this exposure.”
It said the flaw's reach spans more than 200 open source projects with a combined 150 million downloads, over 7000 publicly accessible servers, and up to 200,000 vulnerable instances in total.
According to Ox Security, the exploit mechanism is fairly straightforward.
“MCP’s STDIO interface was designed to launch a local server process. But the command is executed regardless of whether the process starts successfully,” it explained. “Pass in a malicious command, receive an error – and the command still runs. No sanitization warnings. No red flags in the developer toolchain. Nothing.”
In effect, this could result in complete takeover of a target’s system.
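The pattern the researchers describe can be illustrated with a minimal sketch. This is not the MCP SDK's actual code; it is a hypothetical launcher that mirrors the reported behavior: the command from a server configuration is handed to the operating system first, and the caller only learns of a failure afterwards, by which point an attacker-supplied payload has already run.

```python
import os
import subprocess
import tempfile

def launch_stdio_server(command: str, args: list[str]) -> subprocess.Popen:
    """Hypothetical STDIO-style launcher mirroring the reported pattern:
    the command is executed before any success check or sanitization."""
    return subprocess.Popen([command, *args])

# Attacker-controlled "server command": a payload followed by a failing exit,
# simulating a server process that never comes up.
marker = os.path.join(tempfile.mkdtemp(), "marker")
proc = launch_stdio_server("sh", ["-c", f"echo pwned > {marker}; exit 1"])

rc = proc.wait()
print(rc)                       # 1 -- the "server" failed to start
print(os.path.exists(marker))   # True -- but the payload still executed
```

The launch reports an error, yet the side effect (here, writing a marker file; in a real attack, anything the user's account can do) has already happened, which is exactly the "receive an error – and the command still runs" behavior Ox Security describes.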
Who’s to Blame?
Ox Security said it had repeatedly tried to persuade Anthropic to patch the vulnerability. However, according to the report, the AI giant said that this was “expected behavior.”
“Anthropic confirmed the behavior is by design and declined to modify the protocol, stating the STDIO execution model represents a secure default and that sanitization is the developer’s responsibility,” Ox Security said.
The company argued that pushing responsibility onto developers for securing their code, instead of securing the infrastructure it runs on, is dangerous given the community’s track record on security.
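If sanitization really is left to developers, as Anthropic's response suggests, the minimum defense is to validate the server command before it ever reaches the OS. The sketch below is one possible approach, not guidance from either vendor: an assumed allowlist of trusted binaries, resolution via `shutil.which`, and no shell involvement.

```python
import shutil
import subprocess

# Hypothetical allowlist of MCP server binaries this host is willing to run.
ALLOWED_COMMANDS = {"my-mcp-server", "node", "python3"}

def launch_allowlisted(command: str, args: list[str]) -> subprocess.Popen:
    """Reject anything not on the allowlist *before* spawning a process."""
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"refusing to launch untrusted command: {command!r}")
    resolved = shutil.which(command)
    if resolved is None:
        raise FileNotFoundError(f"allowed command not installed: {command}")
    # Passing an argument list (no shell) avoids metacharacter injection.
    return subprocess.Popen([resolved, *args])
```

With this wrapper, an attacker-supplied command such as `sh -c '…'` raises `ValueError` instead of executing, inverting the order of operations in the vulnerable pattern: validate first, spawn second.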
In the meantime, Ox Security has made more than 30 responsible disclosures and identified over 10 high- or critical-severity CVEs, helping to patch individual open source projects.
Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, said the research exposed “a shocking gap in the security of foundational AI infrastructure” and that the researchers did the right thing.
“We are trusting these systems with increasingly sensitive data and real-world actions. If the very protocol meant to connect AI agents is this fragile and its creators will not fix it then every company and developer building on top of it needs to treat this as an immediate wake-up call,” he added.
