
AI-driven ‘slop’ captures unease over digital content glut

Posted December 30, 2025, 09:11

Updated December 30, 2025, 09:11


If one phenomenon defined 2025, it was undoubtedly “slop.” Britain’s weekly news magazine The Economist, the U.S. dictionary Merriam-Webster and Australia’s Macquarie Dictionary all selected the term as their word of the year. Originally used to describe sewage or food waste, the word has more recently come to refer to low-quality digital content produced at scale using artificial intelligence.

Its selection meant it beat out strong contenders such as “TACO,” shorthand for “Trump always chickens out,” a term coined to mock U.S. President Donald Trump’s repeated policy reversals after he regained power early this year, and the “debasement trade,” an investment strategy that shifts capital into alternative assets such as gold or bitcoin amid currency depreciation.

Profiting from low-quality content designed to drive clicks is nothing new. AI-manipulated, context-free videos depicting Trump dancing in a crown are no longer particularly startling. Even so, slop drew unusual attention this year because it was widely seen as a tipping point, signaling that control over content production has shifted decisively to AI.

According to a report released in October by U.S. content marketing company Graphite, 52 percent of newly published English-language web documents were written by AI as of May this year. At the end of 2022, when ChatGPT was launched, the figure stood at just 10 percent. In only three years, AI-generated writing came to account for more than half of online content. Separately, a report released this month by content analysis platform Kapwing found that 21 percent of videos recommended to newly created YouTube accounts were AI-generated. An estimated 70 percent of images on social media platforms such as Instagram are also believed to have passed through AI tools.

The term “slop” carries a tone of derision and mockery toward AI-generated content that is everywhere yet shows little evidence of care or effort, leaving it bland and unengaging. As AI-produced material has flooded the digital space, many users report growing fatigue. In workplaces, a new expression, “workslop,” has emerged to describe poorly assembled, insincere reports hastily produced with AI assistance.

Academia has not been spared. A recent disclosure that an undergraduate student at the University of California, Berkeley used AI to produce 113 academic papers in a single year and submit them to major international conferences sparked widespread controversy. The episode has come to be known as “research slop.”

As a result, the digital world is seeing a stronger-than-ever desire for the creativity and authenticity that only humans can provide. As these technologies become commonplace, trust-based values grounded in deliberation and skepticism are being reexamined.

As people grapple with this challenge in the age of AI, content consumers have begun to devise their own defenses. Online communities actively circulate tips for spotting common signs of AI-generated material, including warped background text and shadows that fail to correspond with light sources. At the same time, so-called horizontal reading has gained broader acceptance: by comparing coverage across multiple news organizations and academic sources, readers attempt to verify origins, context, and credibility through cross-checking.

These developments raise a more fundamental question about where responsibility should lie. Can individuals, relying solely on their own digital literacy, reasonably be expected to filter out false or misleading content? Greater transparency from content creators is therefore essential, beginning with clear and explicit disclosure of when AI has been used. Platforms, meanwhile, must invest in filtering technologies capable of reviewing AI-generated material and blocking low-quality output before it spreads. Governments also have a role to play by dedicating resources to the construction of institutional verification systems. In an era when content production is becoming faster and cheaper, investment in verification must advance at a comparable pace.