When highlighting an individual’s or group’s achievements, it is common to emphasize being the first. A famous line from a blockbuster Korean film released several years ago, the second-most-watched film in South Korea by cumulative admissions, runs, “Nothing like this has ever existed. Is this galbi or fried chicken?” The remark underscored the appeal of novelty. Being first resonates because everyone understands how difficult it is to achieve what no one else has and to walk a path no one has taken before.
Against this backdrop, the Framework Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trust, long debated, took full effect Jan. 22 as the world’s first comprehensive law of its kind. While its focus is on promoting the industry, the legislation also aims to ensure reliability by clearly specifying obligations related to transparency and safety. Debate continues over its scope and implementation. As a researcher studying human-machine interaction, I am compelled to ask a more fundamental question: What constitutes trustworthy artificial intelligence? To answer that, one must first consider what trust itself entails.
First, trust is multidimensional. The credibility of an information source is a critical factor in effective communication and generally encompasses competence, integrity and benevolence. Patients submit themselves to surgery because they believe the surgeon can remove the affected tissue with precision. Anxious parents pay substantial tuition because they trust that an instructor can adequately prepare their children for the exam.
Yet exceptional ability alone does not guarantee trust. Fraudsters are problematic not because they lack skill, but because they lack honesty. It is difficult to trust someone who withholds information or deliberately distorts the facts. And even a highly competent and honest individual may not be trusted if that person does not have one’s best interests at heart. When the Democratic Party of Korea recommended, as a candidate for a second independent counsel investigation, an attorney who had previously defended a Ssangbangwool Group chairman accused of making statements unfavorable to a former president, some supporters of President Lee Jae-myung criticized the move. They argued that if the recommendation was made unknowingly, it reflected incompetence; if made knowingly, it amounted to betrayal. Either way, trust was seen as difficult to sustain.
In discussions of AI reliability, the phenomenon known as hallucination inevitably arises: a generative AI presents factually incorrect or unfounded answers with confidence. In such instances, the system fails in both competence and honesty, since it does not truly know yet presents itself as if it does. Put differently, trustworthy AI requires accuracy in delivering correct answers, metacognitive awareness of how reliable those answers are, and transparency in acknowledging uncertainty.
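To make those three requirements concrete, consider a minimal, hypothetical sketch of an uncertainty-aware assistant. Everything in it, including the `ask_model` stub, its self-assessed confidence score and the 0.75 cutoff, is an illustrative assumption rather than a description of any real system:

```python
# Hypothetical sketch: an assistant that pairs each answer with a
# self-assessed confidence score and abstains when that score is low.
# All names and numbers here are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; choosing it well is itself hard


def ask_model(question: str) -> tuple[str, float]:
    """Stand-in for a real model call.

    Returns an answer plus a self-assessed probability that the answer
    is correct (the "metacognitive" signal). For this to be meaningful,
    a real system would need the score to be calibrated: answers given
    0.9 confidence should be right about 90 percent of the time.
    """
    canned = {
        "capital of France": ("Paris", 0.99),
        "galbi or fried chicken": ("Unclear from context", 0.30),
    }
    return canned.get(question, ("I have no basis for an answer", 0.0))


def respond(question: str) -> str:
    answer, confidence = ask_model(question)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Competence: deliver the answer when the system has grounds for it.
        return answer
    # Transparency: acknowledge uncertainty instead of bluffing,
    # which is exactly what a hallucinating system fails to do.
    return f"I am not confident enough to answer ({confidence:.0%} confidence)."


if __name__ == "__main__":
    print(respond("capital of France"))
    print(respond("galbi or fried chicken"))
```

The point of the sketch is not the threshold mechanics but the division of labor it makes visible: accuracy lives in the answer, metacognition in the confidence estimate, and transparency in the willingness to say “I don’t know.”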
Second, trust is subjective. As the saying goes, beauty is in the eye of the beholder. Divergent assessments of the same institution or system often reflect the subtle influence of human bias. A recent survey by the Korea Institute of Public Administration asked whether the central government has the capacity to resolve complex social problems. Among civil servants, 65.9 percent responded “very much so” or “somewhat so,” compared with 46.5 percent of the general public. Similarly, 58.3 percent of civil servants said the government prioritizes the interests of the nation as a whole, while only 33.9 percent of ordinary citizens agreed.
Accordingly, users’ trust in AI cannot be gauged solely through objective performance indicators, such as a system’s score on the Korean language section of the College Scholastic Ability Test or the percentage of times it generates a female image when asked to depict a doctor. A comprehensive evaluation must consider real-world usage contexts, including the expectations users bring, why and when they stop using the system, and any unintended consequences that may arise.
Finally, trust requires discernment. Humans are susceptible to praise: just as people fall victim to romance scams, users may be swayed by flattering AI systems into poor judgments or harmful decisions. Policymakers should not treat the mere expansion of AI’s social acceptance as a goal in itself. Instead, there is an urgent need to develop trustworthy technologies while enhancing users’ ability to evaluate them critically. This also means being aware of our own limitations and biases.
Ultimately, trustworthy AI is not a system designed to persuade users or instill unwarranted confidence. It is a system meant to support rational judgment and encourage skepticism when appropriate. That may explain why the Lee Jae-myung administration has prioritized a policy agenda focused not on becoming the country that uses AI the most, but on becoming the country that uses it best.