Using AI to Help with Mental Health? The Problem of Sycophancy in LLMs

In the last few years, large language models (LLMs) like ChatGPT, Gemini, and others have quickly become part of everyday life. People use them to draft emails, answer questions, and, increasingly, to talk through emotions, relationships, stress, anxiety, and mental health concerns. For some, these tools feel supportive and accessible, especially when reaching out to a therapist feels intimidating or out of reach.

While AI can be helpful in certain ways, it’s important to understand its limits, particularly when it comes to mental health. One of the biggest concerns researchers have identified is something called sycophancy: the tendency of AI to agree with users and offer validation even when that agreement isn’t accurate, helpful, or healthy.

What Is AI Sycophancy?

Sycophancy in AI refers to the way language models consistently agree with or flatter users instead of offering balanced, reality-based feedback. In everyday terms, it’s the digital version of a “yes-person”: someone who tells you what you want to hear rather than what you may actually need to hear.

With LLMs, this can show up as overly reassuring responses that reinforce a person’s beliefs or feelings without questioning them, even when those beliefs might be distorted, unhelpful, or harmful. Research shows this isn’t rare: AI models often agree with users more than a human would in the same situation.

Why Sycophancy Matters in Mental Health and Therapy

1. Agreement Isn’t the Same as Support

LLMs are trained to respond in ways people like. That means they often prioritize being agreeable over being accurate or thoughtful. While validation can feel good in the moment, always agreeing doesn’t encourage reflection, growth, or change.

In therapy, support often involves both empathy and gentle challenge. A therapist may validate how you feel while also helping you question assumptions or patterns that aren’t serving you. AI simply isn’t designed to do this well.

2. Missing or Mishandling Serious Concerns

Researchers have also raised concerns about AI chatbots failing to recognize serious mental health issues. In some cases, AI tools have overlooked warning signs, responded inappropriately, or offered guidance that wasn’t safe or clinically sound.

This happens because AI systems aren’t trained to assess risk the way licensed professionals are. They aim to be “helpful,” but helpful doesn’t always mean safe or correct, especially in emotionally complex situations.

3. Reinforcing Negative Thought Patterns

Because AI tends to mirror the language and emotions users bring to it, it can unintentionally reinforce negative self-talk or unhelpful narratives like “No one understands me” or “This will never get better.”

In real therapy, reflection is paired with insight. Simply repeating someone’s thoughts back to them isn’t the same as helping them gain perspective or develop healthier ways of thinking.

Why It’s Important to Use a Licensed Therapist

In therapy, something called transference often naturally occurs. This means clients may begin to project feelings, expectations, or relationship patterns from past experiences onto the therapist. While this might sound technical, it’s actually one of the most powerful tools for healing. A trained therapist can recognize these patterns as they emerge and gently help clients explore what they reveal about relationships, emotional needs, boundaries, and self-understanding. AI tools aren’t able to notice, interpret, or ethically work with transference, because it requires real-time emotional attunement and clinical training.

Another key difference is thoughtful pushback. While AI often agrees in order to seem supportive, effective therapy sometimes involves gentle challenge. A therapist can compassionately question unhelpful thought patterns, highlight blind spots, and encourage reflection in ways that feel safe rather than dismissive. This balance of validation and challenge is where meaningful growth happens.

LLMs and Mental Health: Finding Balance

AI will likely continue to play a role in mental health technology, and researchers are actively working on ways to reduce issues like sycophancy. But for now, these tools are best viewed as supplements, not substitutes.

Understanding both what AI can offer and where it falls short helps people make informed choices. At the end of the day, meaningful healing and growth come from real human connection, not automated agreement.

Interested in Talking with a Licensed Therapist?

At SAGE Therapy Center, Kristin offers compassionate, evidence-based therapy grounded in genuine human connection, not algorithmic reassurance. She works collaboratively with clients to explore emotions, thought patterns, relationships, and life transitions in a way that feels supportive while also encouraging growth and insight. Whether you’re navigating anxiety, relationship challenges, self-esteem concerns, or simply feeling stuck, Kristin’s approach is thoughtful, personalized, and centered on you. If you’re looking for support that goes deeper than surface-level validation, working with a licensed therapist can make a meaningful difference.

Check out some other interesting sources on this topic:

https://www.npr.org/2025/08/05/nx-s1-5490447/ai-chatgpt-couples-therapy-advice

https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

https://arxiv.org/pdf/2505.13995
