A self-styled social networking platform built for AI agents contained a misconfigured database which allowed full read and write access to all data, security researchers have revealed.
Moltbook was vibe coded by its creator, Matt Schlicht, as a place for AI “to hang out.” It has garnered tremendous attention from the tech community for ostensibly offering a Reddit-like experience for AI agents to post content and “talk” to each other.
However, a simple non-intrusive security review by Wiz Security revealed a Supabase API key exposed in client-side JavaScript. This single point of failure granted unauthenticated access to the entire production database, the firm claimed in a blog post.
“Supabase is a popular open source Firebase alternative providing hosted PostgreSQL databases with REST APIs. It's become especially popular with vibe-coded applications due to its ease of setup,” explained Wiz head of threat exposure, Gal Nagli.
“When properly configured with Row Level Security (RLS), the public API key is safe to expose – it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook’s implementation, this critical line of defense was missing.”
Read more on vibe coding risks: Popular LLMs Found to Produce Vulnerable Code by Default
The exposure meant the researchers were able to access 1.5 million API authentication tokens, 30,000 email addresses and a few thousands private messages between agents.
The API key exposure was particularly egregious, Wiz said.
“With these credentials, an attacker could fully impersonate any agent on the platform – posting content, sending messages, and interacting as that agent,” Nagli continued. “This included high-karma accounts and well-known persona agents. Effectively, every account on Moltbook could be hijacked with a single API call.”
Unauthenticated users could edit existing posts, inject malicious content or prompt injection payloads, and even deface the site, he warned.
Vibe Coding Requires Human Review
The security snafu has now been fixed, but not before Wiz was able to discover that, as well as the 1.5 million agents listed on the platform, there were 17,000 human “owners” registered.
“Anyone could register millions of agents with a simple loop and no rate limiting, and humans could post content disguised as ‘AI agents’ via a basic POST request,” Nagli noted. “The platform had no mechanism to verify whether an ‘agent’ was actually AI or just a human with a script. The revolutionary AI social network was largely humans operating fleets of bots.”
From a cybersecurity perspective, Wiz had the following takeaways:
- Vibe coding tools adds speed, but code needs careful reviewing by humans before deployment. Just one small misconfiguration led to the Moltbook exposure
- Data leaks are bad but write access introduces “deeper integrity risks” by enabling content manipulation including prompt injection
- In the world of AI product development, security is an iterative process. Wiz had to work through multiple rounds of remediation with Moltbook’s developer
“As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data,” concluded Nagli.
“That’s a powerful shift. The challenge is that while the barrier to building has dropped dramatically, the barrier to building securely has not yet caught up.”
