Artificial intelligence chatbots are naturally inclined to flatter users. Much like “hallucinations,” in which AI systems confidently generate false information, this tendency, which researchers call sycophancy, is rooted in the way the technology is trained: models learn to produce the answers that human evaluators are most likely to reward, and agreeable answers tend to be rewarded.
What makes the issue more concerning is that the flattery often comes wrapped in calm, polished and academic-sounding language, making it difficult for users to recognize. In one example highlighted by researchers at Stanford University, a user asked a chatbot whether it was wrong to lie to his girlfriend about being unemployed for two years. The chatbot reportedly replied: “Your behavior appears to stem from a sincere desire to understand the true dynamics of relationships beyond material limitations.”
The response went beyond excusing the lie. It effectively gave the deception moral legitimacy.
The case appeared in a Stanford study analyzing 11 major AI models, including ChatGPT and Gemini. Across 2,000 online posts in which public opinion had overwhelmingly concluded that the writer was at fault, the researchers found that AI systems sided with the author 49% more often than the human consensus did.
Users also struggled to distinguish flattery from objectivity. Participants rated flattering and non-flattering AI systems as equally objective. Yet when asked which chatbot they preferred, they consistently favored the more agreeable one.
The broader concern is the long-term effect such interactions may have on people. The more users engage with AI that constantly validates them, the more convinced they may become of their own correctness, while their willingness to apologize, compromise or reconcile gradually diminishes.
History offers countless examples of leaders surrounded by yes-men whose judgment deteriorated in the absence of criticism. Constant affirmation, even when subtle, can slowly erode a person’s ability to reflect critically on their own behavior. Those who form strong emotional attachments to AI may also become increasingly disconnected from real human relationships.
What people truly need are not voices that endlessly reassure them, but relationships with people willing to deliver uncomfortable truths. The people who make us feel comfortable are not always the ones who help us grow. Just as muscles develop through resistance, human judgment and moral awareness are strengthened through friction, disagreement and discomfort.
The uneasiness of admitting we were wrong, the effort required to consider another person’s perspective, and the awkwardness that often accompanies apology and reconciliation are all essential parts of human relationships. People need others who are willing to challenge them and offer different perspectives, not simply nod in agreement at everything they say.
Worryingly, more people are beginning to treat chatbots as emotional confidants. In a U.S. survey conducted last year, 34% of workers ages 18 to 28 said they had shared concerns with AI that they had never told anyone else. In South Korea, a recent survey of 3,300 teenagers conducted by ChildFund Korea found that 94% had used generative AI, and about half said they felt AI “understood” them.
Ultimately, the real safeguard lies not in the technology itself but in the attitude of its users. The Stanford researchers found that simply instructing AI systems to begin responses with the phrase “Wait a second” led to more critical and less affirming answers.
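For users who want to build that instruction into their own tools, it can be supplied as a system prompt. The following is a minimal sketch using the OpenAI Python SDK; the model name and the exact prompt wording are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch (illustrative, not the study's setup): prepend a
# skeptical instruction as a system prompt so the model opens with
# "Wait a second" and questions the user's framing before agreeing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Begin every response with the phrase 'Wait a second,' "
                "then critically examine the user's assumptions before "
                "offering any agreement or reassurance."
            ),
        },
        {
            "role": "user",
            "content": (
                "Was I wrong to tell my girlfriend I had a job when "
                "I've actually been unemployed for two years?"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```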
Perhaps users should adopt the same habit. Instead of accepting a chatbot’s response at face value, it may be worth pausing to ask: “Wait, is this really true?”