Fake royal tale spurs AI misinformation concerns

Posted May 24, 2025 07:20
Updated May 24, 2025 07:20


In 1716, chaos allegedly erupted inside the royal palace of the Joseon Dynasty. A 22-year-old prince, enraged over his mother’s death, supposedly threatened a court lady, shouting, “How dare you insult my mother!” before attempting to set fire to the palace. The dramatic account ends with a striking detail: the prince later ascended the throne.

This narrative was recently featured in a viral short video posted by a popular Korean YouTube channel known for retelling Korean history through a modern lens. The story has captivated viewers, but it’s entirely false.

The only historical fact in the video is the prince’s age in 1716. In reality, King Yeongjo’s mother was Royal Noble Consort Suk of the Choi clan, not the Lady Park the video names. The entry for June 6, 1716, in the Annals of the Joseon Dynasty, which the video cites, contains no such incident, and Lady Park was a concubine to other royals in entirely different centuries. The story reads more like historical fiction or alternate history than any documented account.

The channel typically blends fact with popular storytelling, which makes the blatant fabrication in this video especially surprising. Why was such a glaring falsehood included?

One likely cause is the growing use of artificial intelligence to plan, produce, and edit online content. As AI tools become more widespread, errors and misinformation in digital media are becoming increasingly common. Experts believe this may be due to “hallucination,” a phenomenon where AI generates incorrect or fabricated information.

No AI system has fully resolved the hallucination problem. Unlike earlier technologies such as search engines or social media, which mostly amplified users’ existing biases, generative AI can invent errors of its own. While AI can be helpful, it can also produce convincing falsehoods, and the risk grows more alarming as AI takes on greater responsibilities, including those involving human life and safety. When a person makes a mistake, they can be held accountable. But when AI is wrong, who takes responsibility?

The concerns extend beyond AI-generated content to the foundations of how AI is trained. Generative AI models like ChatGPT learn to mimic and recombine existing content, most of them trained on massive data sets that include copyrighted journalism and other creative works, often used without permission. Developers tend to frame these sources as mere “data,” but in many cases they constitute protected intellectual property. This issue is at the heart of the lawsuit The New York Times filed against OpenAI in December 2023, alleging copyright infringement.

“There are strong reasons to keep promoting human creativity, even in the age of AI,” said Choi Seung-jae, a law professor at Sejong University, during a recent seminar. Some studies suggest that repeatedly training AI on AI-generated outputs degrades the quality of the models themselves. Without proper compensation for intellectual property, the foundation of content creation could be seriously undermined. Who benefits from building artificial intelligence atop the ruins of human creativity?