
AI reshapes legal disputes in troubling ways

Posted May 2, 2026, 07:35
Updated May 2, 2026, 07:35


Reading through the case file, I could not help but sigh: “AI again.”

As artificial intelligence systems built on large language models become more widely accessible, courts and legal professionals are seeing a sharp rise in cases where AI-generated answers are lifted verbatim and submitted in legal disputes.

Popular tools such as ChatGPT, Claude and Gemini are all based on large language models. These systems are designed to process vast amounts of human language and generate natural-sounding text. But they do not produce factual certainty. Instead, they generate probabilistic outputs based on context. The more a task requires logical reasoning and causality, the higher the risk of error. In that sense, AI responses are not definitive answers, but reflections of language patterns in how people commonly speak and think.

The distortion often follows a predictable pattern. A user asks, “Isn’t this unfair dismissal?” The AI replies, “It is likely to be unfair dismissal.” At that point, the system is not evaluating facts. The question itself is often vague and poorly framed. LLMs are trained on human feedback and are designed to produce helpful, agreeable responses, which can tilt outputs toward confirmation.

The user then follows up: “Explain why it is unfair dismissal.” From there, things begin to unravel. The AI may introduce non-existent legal concepts, fabricated details, altered case references or entirely invented precedents, while continuing to build on the user’s line of questioning. The result can sound coherent and persuasive, even authoritative, but may be fundamentally wrong.

Encouraged by that output, the user files an unfair dismissal claim. The documents often contain phrases such as “the most important issue is the violation of discretionary fundamental rights,” complete with formatting copied directly from AI chat interfaces. The arguments tend to be vague, repetitive and poorly supported.

Increasingly, legal disputes are being filled with submissions that lack clear logic or evidence. Petitions described as “not just a simple complaint,” filings insisting that “this must be carefully reviewed” without meaningful analysis, and lengthy documents claiming to present “the core point” when no clear core exists have become more common. Volume has grown, but substance has not. Time is being spent checking broken links, searching for nonexistent case numbers and trying to verify fabricated sources, rather than focusing on legal judgment.

Dealing with the parties themselves has also become harder, as many grow rigid and difficult to persuade. They no longer accept explanations: after hours of extended interaction with AI systems, they become anchored to conclusions formed through automated dialogue rather than their own reasoning.

The examples range from the high-profile to the everyday. In one case, Krafton reportedly faced a $250 million lawsuit after following AI-generated guidance on how to handle a dismissal. In another, a person involved in a traffic accident that caused a four-week injury asked AI on the spot what a fair settlement would be, then demanded 500 million won, a figure difficult to justify on any independent reasoning and one that only complicated negotiations further.

In a sexual violence case, a defendant submitted a written statement so laden with secondary victimization of the complainant that it could not be filed in its original form. Given its tone and level of detail, the question arose whether AI had been involved. When asked, the defendant said, “I am not good at writing, so I got help.” It is unknown how many exchanges with AI went into producing a document that so strongly amplified grievance and hostility toward the victim.

Persuading such defendants that AI-generated statements may harm their case has become far more difficult than dealing with emotionally charged but self-written arguments. Once individuals become attached to text that neatly reflects their feelings, they are often unwilling to discard it.

The spread of generative AI appears unavoidable, but the social costs are becoming harder to ignore. There is an urgent need for education on the limitations of language models, including their tendency to produce hallucinated information, and on how to ask precise, verifiable questions.

A broader societal adjustment may be needed, similar to the early adaptation period for the internet and smartphones. There is also a growing need for institutional debate over who bears responsibility for verification and how to allocate the rising costs of disputes shaped by inaccurate AI outputs.

The current situation, where humans are increasingly reduced to verification tools for machine-generated text, is deeply flawed.