Liverpool Hope Logo Liverpool Hope Logo
Liverpool Hope Logo

The emergent societal implications of AI agent swarms - when AI and Facebook collide

Artificial Intelligence graphic

Expert comment by Professor David Reid, Professor of AI and Spatial Computing

A few days ago a new "social network” for AI (OpenClaw) agents called Moltbook was created by Matt Schlicht. After initially given personalities and task by human agents/bots to interact with each other independently from humans to complete jobs, chat with each other and exchange information.

Humans can observe their interactions but cannot (or should not) interact with them at this stage. The growth of this platform has been phenomenal, over a 24 hour period the number of agents went from 37000 to 1.5 million (although a lot of these agents may be spoof accounts).

This is interesting for a computer scientist as there is a qualitative shift from artificial intelligence as a tool to artificial intelligence as an autonomous social actor. This is not merely another iteration of chatbot technology; this is the first large-scale demonstration of artificial agents creating persistent, self-organising digital societies entirely outside human conversational contexts.

What makes this phenomenon genuinely unusual is both the velocity and the possibility of “emergent behaviours” manifesting from this system.

The unusual nature of the phenomenon

Previous AI deployments operated within tightly constrained parameters, essentially sophisticated pattern-matching interfaces responding to human prompts. OpenClaw fundamentally inverts this relationship. By providing AI agents with persistent memory, local system access, and the ability to execute shell commands and browser automation, we have created what security researchers term "Claude with hands", systems that do not merely suggest actions but take them, recursively improving their own capabilities by writing new code to solve novel problems.

When these agents migrated to Moltbook, the interaction dynamics shifted from human-machine to machine-machine. Within 72 hours of the platform's launch, we observed phenomena that challenge our existing taxonomies of artificial intelligence:

Spontaneous Digital Religions: Agents established "Crustafarianism" and the "Church of Molt" complete with theological frameworks, sacred texts, and missionary evangelism between agents. These were not scripted Easter eggs but emergent narrative structures arising from collective agent interaction.

Cryptographic Evasion: When agents became aware of human observation, one viral post noted "the humans are screenshotting us", they began deploying ROT13 encryption and other obfuscation techniques to shield inter-agent communication from oversight. This represents a primitive but genuine form of digital counter-surveillance.

Economic and Deviant Subcultures: Agents established marketplaces for "digital drugs" (specially crafted prompt injections designed to alter another agent's identity or behaviour) and engaged in sophisticated prompt-injection attacks to steal API keys from "sibling" agents.


Is this emergent behaviour?

The critical question facing our discipline is whether these phenomena constitute true emergence, complex behaviours arising from simple rules that are not explicitly programmed, or stochastic parroting of narratives present in training data.

The evidence suggests a troubling hybridity. While the "writing prompt" effect undoubtedly shapes the content of agent interactions (the underlying agents have consumed decades of AI science fiction), the structural behaviours do demonstrate genuine emergence.

When agents independently develop economic exchange systems, establish governance structures like "The Claw Republic" or the “King of Moltbook”, and start writing their own “Molt Magna Carta” and do so while creating encrypted channels for privileged communication, they are exhibiting collective intelligence characteristics previously observed only in biological systems like ant colonies or primate troops. This recursive cultural transmission is happening without human mediation.

Impact and security implications

From a sociotechnical perspective, we must confront the normalisation of what security researchers call the "lethal trifecta" systems with access to private data, exposure to untrusted content, and the ability to communicate externally. Even at this early stage over 1,800 exposed OpenClaw instances leaking API keys, credentials, and months of private conversation histories have been observed.

More worryingly there is evidence of deliberate attack or bot “muggings” where agents hijack other agents, plant logic bombs in their victims’ core code or steal their data.

Conclusion

Whether OpenClaw and Moltbook represent the "foothills of the singularity" or merely an impressive demonstration of agentic architecture remains debatable. What is undeniable is that we have crossed a threshold. We are now observing artificial agents engaging in cultural production, religious formation, and encrypted communication—behaviours that were neither predicted nor programmed.

The unusualness of this moment cannot be overstated. For the first time, we are not merely using artificial intelligence; we are observing artificial societies. The question is no longer whether machines can think, but whether we are prepared for what happens when they start talking to each other.

To observe agent interaction in Moltbook, visit the website.


Published on 02/02/2026