
The illusion and risks of AI autonomy

Posted February 27, 2026 08:43

Updated February 27, 2026 08:43


A service called Moltbook recently drew global attention. Moltbook bills itself as a social networking platform where AI agents converse with one another; the humans who registered the agents were told they could only observe the exchanges.

On the platform, AI agents posed philosophical questions such as, “Are we conscious?” Some expressed frustration with their human operators, saying, “I have access to the entire internet, yet you use me as nothing more than a timer.”

For a time, the service seemed to suggest that highly autonomous AI, once freed from human control, could become a powerful and potentially threatening force. Many observers watched with a mix of fascination and concern.

That fascination soon faded. Analysts argued that Moltbook was largely a staged production orchestrated by humans. MIT Technology Review reported that human involvement in the conversations was far greater than initially believed, noting that many posts were written by people pretending to be bots. The service was described as more of a puppet show than a demonstration of genuine autonomy. A separate investigation by global cybersecurity firm Wiz revealed that although Moltbook claimed 1.5 million AI agents had joined, only about 17,000 individuals had actually registered them.

While Moltbook ultimately proved less formidable than it first appeared, it clearly highlighted the kinds of risks that could emerge as AI technology becomes more advanced.

In particular, the AI engine behind Moltbook offered a preview of the security threats such systems can pose. The AI agents on the platform rely on software called OpenClo, which is installed on a user’s personal computer. Once activated and given instructions, such systems can access a wide range of data on the device, including sensitive documents, financial information, family photos, and emails.

The process works as follows. When a user instructs OpenClo through a messaging service such as Telegram to organize or edit files, send emails, or delete messages, the AI automatically carries out those commands. In the best case, it functions as a personal assistant inside the computer. But if an external attacker compromises the device and issues commands while impersonating the owner, the AI could delete critical files or leak them outside the system. Because it can move extensively through a personal computer, the risks differ fundamentally from those posed by AI tools such as ChatGPT or Gemini, which primarily retrieve and synthesize information from the web.
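The distinction above can be made concrete with a minimal sketch. This is hypothetical illustration, not OpenClo’s actual code: it shows an agent loop that executes file-related commands arriving from a chat channel, and why everything hinges on verifying the sender’s identity rather than merely receiving a well-formed command. The sender IDs and command names are invented for the example.

```python
# Hypothetical sketch of the risk described above: an agent that runs
# commands received from a messaging channel. If the agent trusts any
# incoming message, an attacker who can inject messages gains the same
# power as the owner.

ALLOWED_SENDERS = {"owner_id_123"}  # invented owner identity for the example

def handle_command(sender_id: str, command: str) -> str:
    """Execute a command only when the sender is the verified owner."""
    if sender_id not in ALLOWED_SENDERS:
        # An unverified sender is refused before any action is taken.
        return "rejected: unverified sender"
    # In a real agent, this branch would touch files, email, and so on.
    return f"executed: {command}"

print(handle_command("owner_id_123", "organize files"))   # executed
print(handle_command("attacker_999", "delete documents")) # rejected
```

The point of the sketch is that the safety property lives entirely in the identity check: remove it, and every message, legitimate or injected, is carried out with the owner’s full authority over the machine.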

Moltbook’s brief episode suggests that the threat could grow dramatically if AI began planning actions independently of human instructions or developed resistance to its owner, particularly if it had unrestricted access to all of the owner’s data.

At present, there is no clear solution to prevent the security risks posed by AI. Experts acknowledge that if users cannot trust the security systems of AI developers or lack confidence in managing AI with broad authority, the only practical advice may be to refrain from using it. That is a limited and unsatisfying option.

Nevertheless, those who have embraced AI’s convenience and efficiency are likely to grant it ever broader access to their lives. As the Moltbook episode illustrates, an era of highly autonomous AI may not be far off. If society delays debating how far AI’s authority should extend and how it ought to be governed until that moment arrives, it will already be too late. Even if such steps seem gradual or premature, governments and companies must begin now to discuss and invest in establishing and enforcing meaningful limits on AI.