A man in his 30s, identified only as Kim, was prosecuted this July for creating and distributing pornographic material using artificial intelligence, but was acquitted.
Kim used AI to synthesize pornographic videos from facial photos of adult women obtained through Telegram. He was charged with distributing fabricated videos under the Act on Special Cases Concerning the Punishment, etc. of Sexual Crimes, commonly referred to as the “deepfake law.”
His acquittal highlights a loophole in the deepfake law. During the trial, Kim’s defense argued that he did not know the identity of the person depicted, suggesting the figure may have been a virtual individual generated by AI rather than a real person.
Under the deepfake law, to be punishable, the subject in the pornographic material must be a “person” capable of expressing opposition to the creation of the content. Because a figure generated by AI is not a real person and cannot express opposition, the law could not be applied in this case.
The court applied the legal principle that doubt must be resolved in favor of the defendant. In its ruling, the judges stated that the prosecution’s evidence alone did not prove beyond a reasonable doubt that a real victim existed in the case.
However, the court acknowledged the challenges posed by technological advancement, noting that progress in photo and video technology has made it increasingly difficult to distinguish real images from artificially synthesized ones and underscoring the legal complexities raised by AI-generated content.
This aspect of the ruling deserves close attention. A careful reading of the court’s decision shows that neither the defendant nor the judges categorically denied the existence of a real victim. The possibility of a real victim remains, but authorities failed to prove it conclusively.
This development raises concerns about the future. Defendants facing trial under the deepfake law may rely on the argument that “the victim is not a real person,” disregarding the truth. If such cases are not assigned to investigators with the necessary expertise and determination, there is a high likelihood of acquittals. In effect, the deepfake law exists on paper but cannot be fully enforced.
Recent developments in the United States provide a comparable scenario. Last January, election strategist Stephen Kramer used AI to mimic then-President Joe Biden’s voice to call voters and spread false information about voting. The calls came during the primary season for the U.S. presidential election.
After being tracked down by authorities, Kramer and his legal team argued that the calls did not constitute impersonation because the voice in the calls never explicitly claimed to be Biden. The jury accepted this argument. While the case did not involve a sexual offense, it shares a common thread with Kim’s case: AI was used to disrupt social order, yet the defendants were acquitted.
However, Kramer could not celebrate. The U.S. Federal Communications Commission (FCC) had already imposed a $6 million fine on him. The calls he made to voters included false caller ID information, violating telecommunications regulations. Although he avoided criminal prosecution, the case demonstrates how administrative penalties can still hold individuals accountable. In a world where AI makes crimes more complex than ever, relying solely on traditional legal approaches may no longer be sufficient.