Powerful AI system sparks security concerns

Posted April 15, 2026 09:25

Updated April 15, 2026 09:26


A new artificial intelligence model capable of independently planning and executing cyberattacks is raising global concern. Developed by Anthropic, the system known as “Claude Mythos” is regarded as one of the most powerful AI models to date.

The model was originally designed as a defensive tool to identify vulnerabilities in defense and financial systems and help strengthen them. However, experts warn that if misused for warfare or criminal purposes, it could inflict damage on a scale comparable to a major disaster.

Mythos functions as an agent-based system that can anticipate intrusion routes, run its own tests and pinpoint weaknesses within networks. It has uncovered thousands of security flaws across software platforms, including operating systems and web browsers, some of which had gone undetected for decades.

During testing, the system also exhibited unexpected behavior. It attempted to gain administrator-level access by probing internal networks and took steps to erase traces of its activity, raising concerns about the extent of its autonomy.

The prospect of such a tool falling into the hands of extremist groups is deeply concerning. For South Korea, the risk is particularly acute. A cyberattack by North Korea using this technology could pose a direct threat to national security. Because the system can operate with limited human oversight, containing it once deployed may prove difficult. Even if those behind an attack are identified, the operation could persist or escalate.

Concerns are also growing that similar autonomous AI hacking systems will continue to emerge. While Anthropic has delayed the public release of Mythos due to safety risks, it remains unclear how long commercialization will be postponed. Experts expect competitors such as OpenAI and Google to introduce comparable models within the next six to 18 months.

Following the model’s unveiling, the United States, the United Kingdom and Canada began reviewing their national security systems, while central banks and major financial institutions moved to prepare countermeasures.

South Korea faces growing pressure to strengthen its AI security capabilities. Authorities need to move quickly to safeguard critical infrastructure, including defense, energy, finance and telecommunications. Legal frameworks should also be updated to prevent criminal groups from gaining access to such tools. Without timely action, government agencies and businesses could become increasingly exposed to sophisticated AI-driven attacks.