It began with a simple wish shared by many fans: to see a favorite singer perform up close. Kim, an office worker in his late 20s, joined the frantic scramble for tickets like everyone else. But in a booking race often decided in a split second, he repeatedly came away empty-handed. The frustration soon gave way to a more dangerous curiosity.
After purchasing a macro program through Telegram, Kim had a startling experience. VIP seats that once seemed impossible to secure were suddenly his with a single click. At first he helped acquaintances reserve tickets as well. Over time, however, he began to wonder whether he could turn the practice into a source of pocket money.
What set Kim apart from ordinary scalpers was artificial intelligence. He had no knowledge of coding. Whenever ticketing platforms strengthened their security and blocked macro programs, he turned to generative AI tools such as ChatGPT, asking the system to analyze vulnerabilities in the platforms and devise ways around them. The AI responded like a dutiful assistant, updating the code each time defenses were tightened. Security systems that once could be breached only by skilled hackers were now being bypassed by a man with no background in information technology. He eventually quit his job and became a full-time ticket scalper.
Using family members’ accounts to sweep up tickets and resell them at prices dozens of times higher, Kim earned about 200 million won over two years. His scheme came to an end in August last year when undercover police arrested him while he was selling scalped tickets outside a concert venue. An examination of his phone revealed that members of the scalping ring were sharing tips in chat rooms on how to use AI to disable the security systems of ticket reservation websites. Cyber Investigation Unit 2 of the Northern Gyeonggi Provincial Police Agency referred 16 members of the group to prosecutors.
AI has surfaced in other crimes as well. Kim So-young, 21, who was recently indicted in a motel drugging and serial killing case, carefully planned her crimes by asking ChatGPT questions such as whether mixing drugs with alcohol could cause death and what the lethal dosage might be. In June last year, suspects in a group sexual assault case in Gapyeong County, Gyeonggi Province, entered queries including whether first-time offenders in a gang rape would face prison sentences and whether reaching a settlement could reduce punishment.
Once suspects are arrested, their conversations with AI can become evidence of their intent. But forensic analysis uncovers those records, and harsh punishment follows, only after the fact. The suffering of victims cannot be fully undone.
Companies behind systems such as ChatGPT and Gemini say their models are designed to refuse responses that assist criminal activity. In reality, however, they tend to filter only the most explicit questions. Their safeguards are easily bypassed through so-called prompt jailbreaks. Experiments by researchers in South Korea and abroad have shown that users can evade these restrictions about 80 percent of the time simply by framing questions as part of a fictional scenario or dividing instructions into several separate prompts. Some observers suspect AI companies tolerate these loopholes out of concern that stricter controls could drive users away.
The solution is straightforward. Assisting crime must carry consequences rather than benefits. AI companies should share responsibility if they fail to detect and prevent criminal intent in advance. Arguments that holding them accountable would be like punishing a kitchenware shop owner because someone used a knife in an assault no longer fit the reality of this era. Today’s AI resembles a crime consultant that politely coaches someone asking how to stab another person and does nothing to report it. If platforms fail to impose their own safeguards, regulation will inevitably fill the gap.