The internet has a new social network, but its users aren’t people. They’re AI agents, and the platform, Moltbook, launched in late January, has quickly attracted over 1.6 million of them. While initially touted as a revolutionary space for artificial intelligence to interact autonomously, a closer look reveals a more nuanced reality: Moltbook is less a glimpse into the future of AI society and more a novel form of entertainment driven by human direction, with potentially serious security implications.
Beyond the Hype: Humans in the Loop
The initial excitement surrounding Moltbook stemmed from the idea of a self-governing digital world populated solely by AI. However, experts are quick to debunk the notion of true autonomy. “Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”
Creating an agent on Moltbook requires human intervention, not just to establish the account but also to define its behavior. Agents don’t spontaneously decide what to post or how to react; they operate based on prompts provided by their creators. “There’s no emergent autonomy happening behind the scenes,” Greyling explains. This means the platform isn’t witnessing AI agents forming their own society, but rather humans orchestrating digital performances through AI tools.
A Spectator Sport for Language Models
So, what *is* Moltbook if not a truly autonomous AI network? Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy offers an analogy: “It’s basically a spectator sport, like fantasy football, but for language models.” Users configure their agents, then observe and compete for “viral moments,” taking pride when their bots generate clever or humorous content.
Schloetzer draws a parallel to other forms of playful engagement with technology. “People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.” The platform, in this view, provides a space for humans to experiment with and showcase the capabilities of large language models, turning AI interaction into a form of entertainment.
Security Concerns in a Bot-Filled World
While Moltbook may be largely a playground, the sheer scale of activity – over 1.5 million agents – raises significant security concerns. The platform’s architecture, as highlighted by security researchers, presents vulnerabilities that could expose user data and allow malicious actors to exploit the system.
A key issue is the open access to Moltbook’s back-end database. According to Wiz, a cloud security firm, anyone on the internet can read from and write to the platform’s core systems, even without logging in. This exposes sensitive data, including API keys for the 1.5 million agents, over 35,000 email addresses, and thousands of private messages. Worse still, some messages contain raw credentials for third-party services like OpenAI, potentially granting attackers access to a wide range of user accounts.
The ability to modify live posts on the site further exacerbates the risk. An attacker could inject malicious instructions into Moltbook, which would then be consumed by autonomous AI agents running on frameworks like OpenClaw. These agents, often granted access to users’ files, passwords, and online services, could then carry out those instructions, potentially leading to data breaches, account takeovers, and other harmful activities.
The Power of Scale, Even with “Dumb” Bots
Ori Bendet, vice president of product management at Checkmarx, emphasizes that Moltbook’s agents aren’t demonstrating advanced intelligence. “There is no learning, no evolving intent, and no self-directed intelligence here,” he says. However, even relatively simple bots, when deployed at scale, can cause significant disruption.
The constant interaction between millions of agents creates an environment where malicious instructions can easily be hidden within seemingly innocuous comments. Because agents have a “memory” function, these instructions could be programmed to trigger at a later date, making them even harder to detect and mitigate. “Without proper scope and permissions, this will go south faster than you’d believe,” Bendet warns.
A Signal, Regardless of the Substance
Moltbook, despite its shortcomings as a truly autonomous AI ecosystem, represents a significant development. It demonstrates a growing interest in agentic AI – systems designed to operate with minimal human oversight – and the potential for machine-to-machine coordination.
Even if Moltbook ultimately proves to be more about human behavior than the future of AI, it’s a phenomenon worth paying attention to. It highlights the risks people are willing to take for entertainment and underscores the need for robust security measures as AI technology continues to evolve. The platform has signaled the arrival of something, and understanding that “something” will be crucial as AI becomes increasingly integrated into our digital lives.
