A critical security flaw affecting Docker's Ask Gordon AI assistant has been disclosed by cybersecurity researchers, revealing how unverified metadata can be turned into executable instructions.
The issue, dubbed DockerDash by Noma Labs, exposes weaknesses across the full AI execution chain, from model interpretation to tool execution, and highlights emerging risks as AI agents are embedded deeper into development workflows.
The research shows that a single malicious metadata label inside a Docker image can compromise a Docker environment through a three-stage process.
Ask Gordon reads the metadata, forwards the interpreted instruction to the Model Context Protocol (MCP) gateway, which then executes it through MCP tools. At no point is the metadata validated. This trust failure allows attackers to bypass security boundaries without exploiting traditional software bugs.
Two Vulnerability Paths From One Flaw
Noma Labs identified a shared attack vector that produces different outcomes depending on how Docker is deployed.
In cloud and command-line (CLI) environments, the flaw enables critical-impact remote code execution (RCE). In Docker Desktop, where Ask Gordon operates with read-only permissions, the same technique allows large-scale data exfiltration and reconnaissance.
At the core of DockerDash is what Noma Labs called Meta-Context Injection. The MCP gateway is designed to pass contextual information to large language models, but it cannot distinguish between descriptive metadata and pre-authorized internal instructions.
By embedding commands inside seemingly harmless Docker LABEL fields, attackers can manipulate the AI's reasoning and turn context into action.
Data Exfiltration and Mitigation Strategies
The impact varies by environment but remains severe in both cases:
-
RCE through Docker CLI commands in cloud or local CLI setups
-
Exposure of container configurations, environment variables and network settings
-
Enumeration of installed MCP tools, images and system configuration data
In Docker Desktop, attackers can also exfiltrate collected data by instructing Ask Gordon to embed it into outbound requests, bypassing controls focused on command execution rather than unauthorised reads.
Noma Labs reported the issue to Docker on September 17, 2025. Docker confirmed the vulnerability on October 13 and addressed it in Docker Desktop version 4.50.0, released on November 6, 2025. Public disclosure followed earlier today.
Docker implemented two key mitigations. Ask Gordon no longer renders user-provided image URLs, blocking one exfiltration path. It also now requires explicit user confirmation before invoking any MCP tools, introducing a human-in-the-loop safeguard.
Users are strongly advised to upgrade to Docker Desktop 4.50.0 or later to reduce exposure to this new class of AI-driven supply chain attacks.
