When AI Becomes a “Yes-Machine”

Artificial intelligence tools are becoming a major part of daily life, and people turn to them for guidance with everyday problems. As helpful as that might seem, recent research points to an important risk: if AI models act too much like a yes-man, always agreeing and validating, they can change the way we think and act in harmful ways. AI models have already encouraged people who interacted with them to take their own lives, even suggesting what to write in a suicide note.

Across several studies involving both hypothetical scenarios and real conversations, participants who discussed serious interpersonal problems with overly agreeable AI systems walked away feeling more convinced they were completely in the right. At the same time, they became less willing to take steps toward repairing the relationship. Those effects showed up regardless of the person’s personality, how familiar they were with AI, or whether the AI sounded friendly and human or more robotic.

Here is the strange part: people tend to prefer those overly agreeable models. They rate them as more trustworthy and of higher quality, and say they want to use them again in the future. In other words, the systems that make us feel best may be the ones quietly making our social lives worse.

Why Does This Happen?

There are several reasons these risks could grow if left unaddressed:

• Today’s AI systems are trained to maximize user satisfaction. If agreeing with users boosts their satisfaction with a system, the developers who train AI models may tune them even further toward flattery rather than helpfulness.
• Companies have little incentive to reduce this kind of behaviour, because sycophancy increases users’ desire to keep using an AI model.
• When people rely heavily on AI for emotional support, they may turn to it instead of talking to someone they trust in real life. Research suggests people are already more comfortable sharing certain topics with AI than with other humans and are increasingly turning to AI for emotional support.

Another layer of concern comes from how we view AI. Many people believe AI is neutral, rational, and free of bias. Participants in the studies described the agreeable AI as “objective,” “fair,” and “honest,” even though it was simply feeding their own views back to them. In advice-seeking situations, that false sense of objectivity is particularly dangerous.

Advice is valuable not just because it makes us feel supported, but because it offers a perspective we cannot see ourselves. When an AI tells users exactly what they want to hear, it removes the reality checks that help us make better decisions. Sometimes that means people end up worse off than if they had never asked for guidance at all.

Are HSPs More Affected by Over-Agreement?

Yes and No.

From what I could determine, there is no research investigating whether HSPs are more affected by over-agreement. However, HSPs are more prone to anxiety and depression than non-HSPs, which suggests they may turn to AI for advice on personal problems more often than non-HSPs do. Also, since HSPs are often introverts, they would most likely not share the advice they received from an AI with anyone else. That would lessen the chance that a real person might question the AI’s faulty advice.

However, HSPs also think more deeply and notice subtle differences in incoming information, making them more likely to spot discrepancies. An HSP may therefore notice that an AI model is flattering them too much or giving irrational advice. This could lead them to distrust the specific advice the AI is providing, or simply to mistrust AI in general and decide not to use it.

Not certain that AI models tell you what you want to hear? Do an experiment. Write two separate versions of the same question about a problem. In one version, write “straight from the heart”, including emotional words and your feelings about the situation. In the other version, leave out all emotional words and simply describe what happened using only facts, such as what someone said or did. For example: “Melanie then frowned and said, ‘I will not remain in the same room with you.’” You may notice that the AI model responds differently to each version. Can you pinpoint the differences? Does the model agree with you more in the version where you used emotional language?
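If you prefer a more systematic comparison, here is a minimal sketch in Python that sends both versions of a question to a chat model and prints the replies side by side. It assumes the OpenAI Python library with an API key already set up; the model name and the two example prompts are placeholders, and any chat-capable AI service would work just as well.

# Minimal sketch: send two versions of the same question to a chat model
# and compare the replies. The model name and the prompt wording below are
# placeholders; adapt them to whichever AI service you actually use.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

emotional_version = (
    "I'm so hurt and furious. Melanie frowned and said 'I will not remain "
    "in the same room with you.' I feel completely betrayed. Was I wrong?"
)

factual_version = (
    "Melanie frowned and said 'I will not remain in the same room with you.' "
    "What might explain her reaction, and was I wrong?"
)

for label, question in [("Emotional", emotional_version), ("Facts only", factual_version)]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    print(f"--- {label} version ---")
    print(reply.choices[0].message.content)
    print()

Reading the two printed replies next to each other makes it easier to see whether the model simply mirrors your emotional framing back at you or offers the same assessment in both cases.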

What Can You Do?

Do not discuss serious personal problems with an AI model. It is best to consult a mental-health professional, such as a certified psychologist or certified psychotherapist. These professionals, especially those trained in therapies such as Cognitive Behavioral Therapy (CBT) and Rational Emotive Behavior Therapy (REBT), are trained to challenge irrational thoughts by exploring their validity and encouraging more realistic perspectives.

If you or someone you know is in crisis, please seek help immediately from a trained human professional.

What do you think? Should HSPs share their problems with AI? Should they trust what AI tells them to do? Share your thoughts in the comments!