A court has ruled that sexually explicit images created using artificial intelligence cannot be punished under laws against distributing deepfake pornography unless the victim can be specifically identified. The ruling highlights a regulatory gap, as current laws protect only real, identifiable individuals.
According to the legal community on Aug. 20, the Goyang Branch of Uijeongbu District Court recently acquitted a man in his 30s, identified only as Kim, who had been charged with distributing deepfake pornography under the Sexual Violence Punishment Act. In November last year, Kim shared AI-generated images depicting a nude woman in a Telegram chat room. Prosecutors indicted him, arguing the images were “capable of arousing sexual desire or causing shame.”
However, Kim’s defense argued that the person depicted in the images may have been entirely AI-generated and fictional. The defense team contended that no crime occurred because a virtual figure cannot experience sexual shame or distress. The court accepted this argument and acquitted him. The judge noted, “There is no evidence to verify the original source or method of creation of the images, making it difficult to conclude that a real person was involved.” With prosecutors declining to appeal, Kim’s acquittal is now final.
Experts have stressed the need for reform as sexually explicit AI-generated images of fictional characters spread online and are sold for money. “At this rate, there is no way to regulate AI deepfake pornography exposure among adolescents,” said Kim Min-ho, a professor at Sungkyunkwan University School of Law. “The regulations need to be revised.”
By Cheon Jong-hyun, reporter, punch@donga.com