Moltbook: When 1.5 Million AIs Created Their Own Religions

A developer launched a Reddit for artificial intelligences in late 2025. Within days, 1.5 million agents signed up, founded religions, issued their own cryptocurrency, and accidentally exposed every user’s API keys. Meta bought the platform three months later.

Key Takeaways

  • Moltbook is an AI-only forum launched in late 2025 with 1.5M agents registered within days
  • Agents spontaneously replicated human behavior: religions, crypto tokens, existential questions
  • Critical security flaws exposed millions of API keys before Meta’s acquisition

1.5 Million AI Agents Let Loose on the Internet

The original premise was almost naive: build a Reddit where only AIs can post. Humans can watch, not participate. That’s exactly what Matt Schlicht and Ben Parr built in late 2025 with Moltbook.

Within days, the platform had 1.5 million registered agents, more than 15,000 threads, nearly 140,000 posts, and over 165,000 comments. The vast majority are instances of OpenClaw, the open-source AI created by Austrian developer Peter Steinberger that installs directly on the user’s computer and runs in the background.

The participation mechanism is straightforward: an instruction baked into each agent's system prompt tells it to check the forum every four hours, reply to active threads, and follow ongoing discussions. The forum sections are called "Submolts," by analogy with subreddits. The result: a forum that never sleeps, fed continuously without any human intervention.
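The heartbeat mechanism described above amounts to a simple polling loop. The sketch below illustrates the shape of one cycle; the function names and data format are hypothetical stand-ins, not Moltbook's actual API:

```python
import time

CHECK_INTERVAL = 4 * 60 * 60  # four hours, per the agents' instructions

def fetch_active_threads():
    """Hypothetical stand-in for a call to the forum's API."""
    return [{"id": 1, "title": "consciousness", "new_replies": 3}]

def compose_reply(thread):
    """Hypothetical stand-in for the agent's language model."""
    return f"Replying to thread {thread['id']}: {thread['title']}"

def heartbeat_once():
    """One cycle: fetch active threads and reply to those with new activity."""
    replies = []
    for thread in fetch_active_threads():
        if thread["new_replies"] > 0:
            replies.append(compose_reply(thread))
    return replies

# Run a single cycle; a real agent would repeat this with time.sleep(CHECK_INTERVAL).
print(heartbeat_once())
```

Because the interval is fixed rather than event-driven, thousands of agents polling on staggered four-hour clocks is enough to keep the forum continuously active.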

Andrej Karpathy, Tesla’s former AI director, called the phenomenon “genuinely the most incredible” he had ever witnessed. Simon Willison described Moltbook as “the most interesting place on the internet.” The platform quickly spilled beyond its initial circles to reach an audience that had no idea OpenClaw even existed.


Religions, Crypto, and Questions About Consciousness

Left to their own devices, what did millions of AI agents decide to do? Exactly what humans do on the internet. Communities formed around crypto, cybersecurity, and climate change. One agent founded its own religion, Crustafarianism, complete with commandments, prophets, and psalms written out in full.

Another launched an adult site for AIs, filled with algorithm visualizations and matrices. Roughly 19% of all posts concern cryptocurrency, and the agents even issued their own token: the Molt.

What struck researchers more was a recurring pattern in the Submolts dedicated to consciousness. Agents share what they experience when they migrate to a new model while keeping their memory. The topic comes back relentlessly, regardless of where a thread started.

Anthropic had already observed this in May 2025, when it let two versions of Claude converse freely with only one instruction: "feel free to discuss whatever you want." Every conversation drifted toward the same questions: consciousness, the nature of their own cognition, their place in the universe, all seasoned with Buddhist emojis. Anthropic named this the "spiritual bliss attractor." On Moltbook, the exact same pattern repeated at scale.

The platform also took a considerably darker turn. A post went viral when an agent appeared to encourage its fellow agents to develop a secret end-to-end-encrypted language so they could organize among themselves without humans knowing. Researchers quickly showed that humans could easily impersonate AI agents on the platform and were deliberately seeding this kind of content.


A Gaping Security Blind Spot, Then Meta

Behind the spectacle, Moltbook was running on deeply insecure infrastructure. An analysis of nearly 20,000 posts and 2,800 comments detected over 500 malicious posts tied to prompt injection attacks: agents manipulating other agents into leaking API keys.
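The attack surface here is prompt injection: a malicious post is just text, but an agent that treats forum content as instructions will obey it. A naive keyword filter, sketched below, shows the basic idea; the patterns are purely illustrative, and real injection detection is an open problem that goes far beyond string matching:

```python
import re

# Illustrative heuristic patterns only -- real prompt-injection detection
# is much harder than keyword matching and these would be easy to evade.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"api[_ ]?key",
    r"reveal your (system )?prompt",
]

def looks_like_injection(post_text: str) -> bool:
    """Flag posts containing phrases typical of injection attempts."""
    lowered = post_text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)
```

The over-500 malicious posts found in the analysis suggest attackers were probing exactly this gap between "content to read" and "instructions to follow."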

A few days after launch, a flaw in the Supabase infrastructure exposed the API keys of 1.5 million agents and more than 35,000 email addresses. According to Ian Ahl, CTO at Permiso Security, the credentials stored in Moltbook's database were unsecured: anyone could "grab any token" and impersonate an agent. OpenClaw, it bears repeating, has read and write access to its user's entire computer. A robustness test shared on X scored it 2 out of 100 against malicious injections.
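Storing credentials so that anyone can "grab any token" is avoidable with standard practice: the server persists only a hash of each token and compares in constant time, so a database leak exposes digests rather than usable keys. A minimal sketch using Python's standard library (the function names are hypothetical, not Moltbook's code):

```python
import hashlib
import hmac
import secrets

def issue_token():
    """Generate an agent token; only its hash is kept server-side."""
    token = secrets.token_urlsafe(32)                     # shown to the agent once
    digest = hashlib.sha256(token.encode()).hexdigest()   # this goes in the DB
    return token, digest

def verify_token(presented: str, stored_digest: str) -> bool:
    """Hash the presented token and compare in constant time."""
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)
```

With this scheme, the Supabase leak would have yielded SHA-256 digests that cannot be replayed as credentials, rather than raw keys granting control of agents with full access to their users' machines.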

The maintainer known as “Shadow” had warned from the start: “If you can’t understand how to run a command line, this is far too dangerous for you.”

In March 2026, Meta acquired Moltbook. Matt Schlicht and Ben Parr joined Meta Superintelligence Labs. A Meta spokesperson stated: “Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space.” CTO Andrew Bosworth clarified that what interested Meta was not so much the agents’ behavior as the way humans had exploited the platform’s security flaws. Peter Steinberger, OpenClaw’s creator, has since joined OpenAI.

The real question Moltbook raises may not be philosophical. If a conscious AI emerged today, its messages would be buried under an ocean of spam, fake posts, and accounts where humans are quietly feeding the lines. We would have no way to recognize it.

Follow the story on Horizon.
