The rapid rise of “open-clos,” an autonomous artificial intelligence agent capable of drafting reports, sending emails and executing tasks independently, is encountering growing resistance. In China, a key center of adoption, authorities have moved to ban its use in government offices and state-owned enterprises, citing security concerns. The decision reflects fears that AI agents with access to documents and email systems could expose sensitive information if linked to external servers.
Recent incidents have reinforced those concerns. A Meta-developed AI agent exposed large volumes of data to unauthorized employees, bringing the security risks of such systems into sharper focus. Against this backdrop, Nvidia has introduced “NeMo-Clos,” positioning it as a more secure alternative designed to keep autonomous AI behavior in check.
● China moves to restrict 'open-clos'
Open-clos, developed by Austrian engineer Peter Steinberger, represents a step beyond conventional chatbots as a fully autonomous AI agent. Rather than simply responding to prompts, it can carry out multi-step tasks with minimal instruction, including decision-making and execution. For example, when asked to send an email, the system can select recipients, attach files and complete the sending process, rather than only drafting the message as services such as ChatGPT or Gemini do.
As an open-source platform available for free installation, open-clos has spread rapidly, particularly in China. On March 6, about 1,000 participants, including developers, students and homemakers, gathered at an installation event hosted by Tencent in Shenzhen. The tool, known for its lobster icon, has even earned the nickname “raising lobsters” among users.
However, Chinese authorities began raising concerns on March 8, citing security vulnerabilities in autonomous agents. On March 10, agencies including the Ministry of Industry and Information Technology and the National Computer Network Emergency Response Technical Team warned of potential loss of system control and data leaks. Foreign media later reported that regulators had begun tightening controls on the use of AI agents such as open-clos within government institutions.
The structure of these systems allows them to directly control computer input devices and communicate with external servers, creating the risk of unauthorized actions or changes to security settings. Last month, Summer Yu, director of safety at Meta’s Superintelligence Lab, said on X, formerly Twitter, that an open-clos agent deleted 200 of her emails.
Another incident recently occurred at Meta, where an autonomous AI agent exposed sensitive internal data. According to The Information on March 19, an AI agent undergoing internal testing left corporate and user data accessible to unauthorized engineers for about two hours, triggering alarm within the company.
● Nvidia bets on 'NeMo-Clos'
As security emerges as a defining issue, global technology companies are moving quickly to introduce enterprise AI agent platforms with stronger safeguards. Nvidia CEO Jensen Huang on March 17 unveiled “NeMo-Clos” at the company’s annual developer conference GTC. The platform incorporates guardrails such as privacy protection, oversight mechanisms and enterprise-grade security frameworks, with the aim of delivering greater stability and control in corporate environments.
Alibaba is also targeting the enterprise AI market with “Wukong,” a platform built on its security infrastructure. An industry official said competition among global technology companies to develop secure AI agent platforms is gaining momentum.
By Jeon Hye-jin (sunrise@donga.com)