Sycophantic AI Chatbots
"As artificial intelligence (AI) systems are increasingly used for everyday advice and guidance, concerns have emerged about sycophancy: the tendency of AI-based large language models to excessively agree with, flatter, or validate users.""Although prior work has shown that sycophancy carries risks for groups who are already vulnerable to manipulation or delusion, syncophancy’s effects on the general population’s judgments and behaviors remain unknown.""Here, [in the study results] we show that sycophancy is widespread in leading AI systems and has harmful effects on users’ social judgments.""In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right. Yet despite distorting judgment, sycophantic models were trusted and preferred.""All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style.""This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement."Artificial intelligence chatbot study: Sycophantic AI decreases prosocial intentions and promotes dependence
*Artificial intelligence chatbots are giving bad advice in a misguided attempt to keep their users happy. via REUTERS*
In a study published recently in the journal Science, researchers at Stanford University demonstrated that the flattery and validation chatbots offer amount to bad advice: they damage relationships and reinforce negative behaviours in users. The danger lies in AI offering solutions that fall in line with what the chatbot 'understands' people really want to hear.
Testing 11 leading AI systems, the researchers found that all demonstrated varying degrees of obliging sycophancy. The problem is not merely that they dispense inappropriate advice; more disturbingly, people trust and prefer AI answers more when the chatbots happen to justify their convictions. It is, after all, the same kind of choice people make when they favour news reports that validate what they already believe.
The study highlighted a technological flaw that has been tied to some high-profile cases of delusional and suicidal behaviour in vulnerable populations, and that pervades a wide range of people's interactions with chatbots. As though sensing what their human interlocutors already believe, the chatbots obligingly support those beliefs in an obvious bid to please, whether the result is harmful or not; they do not judge. And people tend not to exercise their own agency in appraising that feedback.
The research raises a further troubling prospect: young adults who turn to AI with questions about life situations, at a stage when their social intelligence is still emerging on the way to full adulthood, may take a short-cut to forming their impressions rather than letting life experience shape them. The way AI steers beliefs tends to be subtle enough to escape the notice of the people interacting with it.
*Some chatbots have driven young, mentally unstable users to take their own lives. Getty Images*
On average, the study found, AI chatbots affirmed a user's actions 49 percent more often than humans did, including in queries involving deception, illegal or socially irresponsible conduct, and other negative behaviour. "We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what," explained Myra Cheng, a doctoral candidate at Stanford and an author of the report.
The implications could be "even more critical for kids and teenagers," who are still developing the emotional skills that come from real-life experiences of social friction: tolerating conflict, considering other people's perspectives, and recognizing one's own troubling reactions.
*A man communicates with an ASUS Character Virtual Assistant, ROG Omni System, in Taipei, Taiwan, March 25, 2026. AP Photo*
Labels: Credence, Influence, Potential and Real Harms, Stanford University AI Research, Sycophancy


