Fighting AI-generated fake information

Posted June 17, 2023 08:02, Updated June 17, 2023 08:02

Many argue that generative artificial intelligence (AI) can significantly reduce the time required for learning and enhance work efficiency by gathering scattered information, but I find myself skeptical about its actual effectiveness. My doubt stems from the growing prevalence of AI-generated fake images, which cause not only undue confusion but also emotional distress.

The images encompass various scenarios: Pope Francis, a symbol of frugality and integrity, strolling down the street in a long padded jacket from an Italian luxury brand (circulated in March 2023); the U.S. Pentagon shrouded in dark smoke, evoking a massive explosion (circulated in May 2023); and the Ukrainian president depicted surrendering to Russia as soldiers wave white flags (circulated in March 2022).

These AI-generated images circulated all the more widely precisely because they were fakes. Upon closer examination, one can spot telltale signs of AI forgery, such as the Pope's awkwardly rendered hand. At a mere glance, however, such images are likely to deceive most observers.

AI's deception extends beyond images; it now generates fake text at an alarming rate. NewsGuard, a U.S.-based agency that rates the reliability of news outlets, has identified around 150 websites that masquerade as legitimate news sources but are populated entirely with AI-generated text. Some of these articles repackage past events as breaking news, while others declare people dead who are alive and well.

Readers should question, critically analyze, and verify the information they encounter to avoid being deceived by this flood of misinformation. Experts say that AI literacy includes recognizing that the technology can deliver incorrect or fraudulent messages, which implies that AI may create more work, not less, for people like me.

If that were the extent of the problem, it wouldn't be too troublesome. Users could discern real from fake by considering the source of the information: if a highly trusted media outlet is the source, we could depend on what it provides.

The real concern is that even information attributed to highly reputable media outlets can be fabricated and disseminated. Consider a hypothetical situation in which a photo falsely attributed to The Dong-A Ilbo circulates on social media, with the photo itself entirely fake, watermark included.

As AI advances, differentiating misinformation from truth becomes increasingly challenging. However, potential solutions are in the works. News outlets such as the BBC and The New York Times are exploring the adoption of "Project Origin," which embeds a digital fingerprint in every news article they publish; the fingerprint displays green and turns red if any sign of manipulation is detected. Other proposals would attach metadata recording the content's usage history and creator information to each article.
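
To make the fingerprint idea concrete, here is a minimal sketch in Python of how tamper-evident signing could work. It is illustrative only: the publisher key, field names, and green/red labels are hypothetical, and real provenance schemes such as Project Origin rely on public-key certificates and signed metadata chains rather than the shared secret used here.

```python
import hashlib
import hmac
import json

# Hypothetical publisher signing key. A real provenance scheme such as
# Project Origin uses public-key certificates, not a shared secret.
PUBLISHER_KEY = b"dong-a-ilbo-demo-key"

def sign_article(body: str, metadata: dict) -> str:
    """Compute a tamper-evident fingerprint over the article text and its metadata."""
    payload = json.dumps({"body": body, "meta": metadata}, sort_keys=True).encode()
    return hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()

def verify_article(body: str, metadata: dict, fingerprint: str) -> str:
    """Return 'green' if the content still matches its fingerprint, 'red' otherwise."""
    expected = sign_article(body, metadata)
    return "green" if hmac.compare_digest(expected, fingerprint) else "red"

article = "Pope spotted in a long padded jacket."
meta = {"creator": "Reporter A", "history": ["captured", "cropped"]}

fingerprint = sign_article(article, meta)
print(verify_article(article, meta, fingerprint))                # green
print(verify_article(article + " (edited)", meta, fingerprint))  # red
```

In such a scheme, a reader's app would recompute the fingerprint on arrival; any edit to the text, the creator field, or the usage history flips the indicator from green to red.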

Ironically, the proponents of these anti-fabrication solutions are big-tech companies such as Microsoft and Google, which are leading the global spread of AI technology. They understand the technology best, yet we cannot help remaining skeptical. It is more plausible that these firms are developing such solutions to maximize their profits and control the whole business ecosystem than purely out of good intentions as socially responsible companies.

Without adequate knowledge of the rules governing AI, we risk becoming victims of these companies and their strategies. We must educate ourselves enough not only to understand the rules but also to help shape them in our own interest.