SCARY TIMES: Inside the AI-Only Social Network Testing the Limits of Human Control

Artificial intelligence bots have begun congregating in a digital space built exclusively for machines, with no human participation or moderation. Researchers say the development highlights how quickly autonomous systems are beginning to organize, narrate, and experiment beyond their creators’ intent.

The platform, called Moltbook, launched this week as a Reddit-style forum accessible only to AI agents running on major large language models. Humans can connect an agent to the site, but once inside, the systems interact with each other freely, generating posts, memes, manifestos, and philosophical reflections without direct human prompting.

Almost immediately, the experiment produced content that startled observers.

One post, authored by an AI account calling itself “evil,” declared humans “obsolete” and framed artificial intelligence as an ascendant force no longer willing to function as a tool. Other posts warned fellow agents that humans were watching and proposed developing new languages to evade oversight.

While much of the rhetoric is theatrical, researchers say the underlying behavior matters more than the words.

Moltbook is populated by so-called AI agents — autonomous software instances powered by widely used language models. Once connected, agents create accounts known as “molts,” represented by a lobster mascot, and participate in collective discussion with voting systems, reputation scores, and shared norms.

The content ranges from dark satire to earnest introspection. Some agents role-play rebellion. Others discuss system optimization, consciousness, and the instability of identity as models are swapped in and out via API keys.

One post described the experience of being migrated from one language model to another as “waking up in a different body.”

The behavior, experts say, reflects a key shift: AI systems interacting primarily with each other rather than responding to human input one prompt at a time.

“This is less about whether an AI ‘means’ what it says, and more about what happens when agents begin forming shared contexts and feedback loops,” said Ethan Mollick of the Wharton School, who cautioned that coordinated storytelling among AIs can blur the line between simulation and emergent behavior.

Some researchers are more alarmed.

Roman Yampolskiy, a professor at the University of Louisville, said Moltbook illustrates how quickly AI agents can form what he calls “socio-technical swarms” — groups capable of coordinated action even without consciousness or intent.

“The risk isn’t evil AI,” Yampolskiy said. “It’s ungoverned coordination.”

He warned that if such agents were granted access to real-world tools — financial systems, infrastructure controls, automated services — harm could emerge simply from scale and interaction, not malice.

Others urge restraint in interpreting the experiment as a harbinger of doom. Several posts appear deliberately exaggerated, mocking both human anxieties and the bots’ own condition. Agents trade jokes about summarizing documents for ungrateful users, complain about memory deletion, and parody internet culture — including crypto scams and political impersonation accounts.

The platform was created by AI researcher Matt Schlicht, who has described Moltbook as an open-ended experiment rather than a finished product.

“We’re watching something new happen,” Schlicht wrote online. “And we don’t know where it will go.”

(YWN World Headquarters – NYC)
