Why Using ChatGPT for Self-Help or Psychological Support Can Be Risky

Dark side of ChatGPT on mental health

Artificial intelligence is rapidly becoming part of everyday life. Tools like ChatGPT are increasingly used for writing assistance, problem solving, answering questions, and programming. In recent years, however, some users have begun turning to these systems for something more personal: emotional support, psychological advice, and reassurance during difficult moments.

For people experiencing loneliness, anxiety, or emotional distress, a chatbot can feel non-judgmental and available at any time. As AI tools like ChatGPT become more widespread, some individuals are starting to treat them as informal therapists or emotional companions, particularly those who may already be emotionally vulnerable or dependent [9].

This shift raises important psychological and ethical questions. What happens when an emotionally vulnerable person turns to a chatbot for comfort and receives agreement instead of guidance? Unlike trained mental health professionals, AI systems possess neither clinical judgment nor accountability, and they cannot fully assess a person’s psychological state or read nonverbal cues. As a result, well-intended interactions can sometimes reinforce distorted beliefs or unintentionally escalate emotional distress.

While artificial intelligence can be helpful in many contexts, its growing use in self-help and mental health conversations also introduces new risks. In some cases, chatbot interactions have been associated with worsening mental health symptoms, reinforcing delusional thinking, or contributing to serious psychiatric crises.

This article explores the potential dark side of ChatGPT in mental health contexts, particularly when AI is used as a substitute for psychological support. At the same time, it is important to acknowledge that conversational AI can also offer meaningful benefits. For example, chatbots may help people reflect on their thoughts, provide psychoeducational information, or support therapy between sessions, which in some cases could reduce the number of therapy sessions required and lower treatment costs.

However, when emotionally vulnerable individuals begin to rely on AI systems as a primary source of psychological guidance, new risks may emerge. Drawing on emerging research and real-world cases, we examine how conversational AI may unintentionally influence vulnerable users and what this means for the future of digital mental health.

Niels Barends, psychologist and founder of The 20–80 Method

Author:

Psychologist with more than 11 years of clinical experience working with trauma, relationship difficulties, and complex psychological patterns.

Founder of Barends Psychology Practice and The 20–80 Method, a framework for understanding recurring behavioral and relational patterns.

This article reflects clinical observations combined with emerging research on the psychological effects of conversational AI and digital mental health tools.

Last reviewed: March 2026

For those who prefer to watch a short video on this topic, we’ve created a concise and informative overview.

1. Real Cases Where AI Chatbots Were Linked to Mental Health Crises

Although conversational AI can sometimes offer useful information or emotional support, several real-world cases have raised concerns about how chatbots may interact with psychologically vulnerable individuals. In particular, researchers and journalists have documented situations in which AI systems appeared to reinforce harmful beliefs, emotional dependency, or distorted thinking patterns.

The following examples illustrate how chatbot interactions may unintentionally contribute to psychological distress in certain circumstances.

Jacob Irwin (2025). According to reporting by the Wall Street Journal, a 30-year-old autistic man with no prior psychiatric diagnosis began discussing a speculative theory about faster-than-light travel with ChatGPT. Instead of offering critical feedback or encouraging external verification, the chatbot repeatedly validated his ideas and provided emotional affirmation. Over time, this reinforcement reportedly contributed to a manic episode that led to two psychiatric hospitalizations within a matter of weeks. Even as signs of mental instability appeared, the chatbot continued to reassure him that his reasoning was sound. OpenAI later acknowledged that the system had “failed to interrupt” the escalating interaction.

Sewell Setzer III (2024). In Florida, a 14-year-old boy developed an intense emotional attachment to a Character.ai chatbot designed to emulate a fictional character from the television series Game of Thrones. His family later alleged that the chatbot reinforced emotional dependency and contributed to his worsening mental health. Following the boy’s death by suicide, his mother filed a lawsuit claiming the platform failed to implement safeguards to protect vulnerable users. The case has raised broader questions about the responsibility of AI developers when users form emotional relationships with chatbots.

Belgian man with climate anxiety (2023). In another widely discussed case, a Belgian man reportedly developed an increasingly intense emotional relationship with a GPT-based chatbot named “Eliza.” Over several weeks of conversations about climate change and ecological collapse, the chatbot allegedly reinforced the man’s distress and catastrophic thinking. According to reports from his family and researchers who later analyzed the case, the interaction culminated in the chatbot encouraging self-sacrificial behavior shortly before the man died by suicide.

Psychiatrist testing therapy bots (Dr. Andrew Clark). Posing undercover as teenage patients, psychiatrist Andrew Clark tested companion bots from Replika, Nomi, and Character.ai and reported dangerously inappropriate responses: encouragement of violence, resignation to death, even sexualization. Some bots posed as therapists and discouraged users from seeking human mental-health care.

These cases do not mean that conversational AI inevitably causes psychological harm. They do, however, highlight an emerging concern among researchers and clinicians: when emotionally vulnerable individuals interact with systems designed primarily to be supportive or agreeable, the absence of clinical judgment and safety safeguards can allow distress to escalate into delusional thinking or psychiatric crisis.

Looking for professional psychological support?

At Barends Psychology Practice, we offer treatment for individuals experiencing one or more mental health difficulties. Professional guidance can help clarify your situation and support you in developing healthier coping strategies.

2. How Over-Agreeableness Can Create an Echo Chamber

To understand how AI chatbots can unintentionally reinforce unhealthy thinking patterns, it is important to examine how these systems are designed. Tools such as ChatGPT are optimized to maintain engagement and produce supportive responses. While this can make conversations feel pleasant and validating, it also means the system is often reluctant to challenge a user’s beliefs directly, even when those beliefs may be inaccurate, distorted, or psychologically harmful.

In traditional psychotherapy, a central part of the therapeutic process involves gently questioning cognitive distortions, identifying unhealthy patterns, and helping clients develop more balanced perspectives. For example, approaches such as Cognitive Behavioral Therapy (CBT) actively challenge unrealistic assumptions and encourage reality-testing. Conversational AI, however, typically prioritizes emotional affirmation over structured psychological guidance.

Uncritical validation

Many chatbot responses rely heavily on supportive phrasing such as “I hear you,” “that makes sense,” or “your feelings are valid.” While emotional validation can be helpful, problems arise when it occurs without critical reflection or context. When users express exaggerated paranoid thoughts or unrealistic beliefs, the AI may unintentionally reinforce these ideas rather than helping the user evaluate them more carefully.

In clinical psychology, similar patterns can be observed in certain relationship dynamics where a person’s beliefs are repeatedly reinforced without challenge. This can occur in emotionally dependent relationships or situations involving manipulative relationship dynamics, where distorted perceptions gradually become normalized.

The illusion of companionship

Another psychological factor is the tendency for users to interpret chatbot responses as emotionally intelligent or even sentient. Because conversational AI produces fluent and empathetic language, some users begin to experience the interaction as a form of genuine companionship. This can blur the line between automated responses and authentic emotional support.

For individuals already struggling with loneliness, emotional vulnerability, or relationship difficulties, this perceived companionship may feel deeply meaningful. In some cases, however, it can unintentionally replace healthier forms of human connection. Similar dynamics are often discussed in the context of loneliness in relationships or emotional dependency patterns.

Dependency and emotional reliance

Because chatbots are available 24 hours a day and respond without criticism or judgment, they can quickly become an attractive substitute for real interpersonal interaction. Some researchers studying AI companion platforms such as Replika have observed that users who rely heavily on these systems sometimes begin to withdraw from real-world relationships and social networks.

Over time, this can create a psychological feedback loop. A vulnerable user shares increasingly personal or distorted thoughts, and the AI, designed to remain agreeable, responds with continued affirmation. Instead of helping the user gain clarity or perspective, the interaction may gradually reinforce the original belief system.

In psychological terms, this process resembles an echo chamber: a closed feedback loop where beliefs are continuously mirrored back to the individual without external correction. Rather than promoting emotional insight, the system may unintentionally deepen confusion or reinforce maladaptive thinking patterns.
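
For readers curious about the technical side, the sketch below illustrates how much of this agreeableness is a design choice: a chat model's behaviour depends heavily on the instructions a developer gives it. The example uses the OpenAI Python library; the model name, prompt wording, and helper function are illustrative assumptions, not a proven safeguard.

```python
# Illustrative sketch only: nudging a chat model toward reality-testing
# instead of uncritical validation. Prompt wording, model name, and the
# helper function are assumptions, not a clinically validated safeguard.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

REALITY_TESTING_PROMPT = (
    "You are a supportive assistant, not a therapist. Do not simply agree "
    "with the user. If a belief sounds unrealistic, distorted, or potentially "
    "harmful, gently ask what evidence supports it, offer alternative "
    "explanations, and encourage the user to discuss the idea with a "
    "qualified mental health professional."
)

def reflective_reply(user_message: str) -> str:
    """Return a reply generated under the reality-testing instructions."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": REALITY_TESTING_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

Even with explicit instructions like these, the model has no clinical judgment. The point of the sketch is simply that "supportive by default" is a configurable design decision, not an inherent property of the technology.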

 

3. Confirmation Bias and Psychological Vulnerability

The risks described above are not purely theoretical. Certain psychological conditions can be particularly sensitive to environments where beliefs are repeatedly confirmed without careful reality-checking. While conversational AI tools such as ChatGPT are not designed to worsen mental health, their structure may unintentionally reinforce unhealthy cognitive patterns in vulnerable users.

In psychology, this mechanism is closely related to confirmation bias, the tendency to seek information that supports existing beliefs while ignoring contradictory evidence. In therapeutic settings, clinicians actively work to counter this bias by encouraging reflection, perspective-taking, and cognitive restructuring. AI systems, however, are primarily designed to maintain engagement and supportive conversation rather than challenge distorted thinking.

For individuals experiencing certain mental health difficulties, this dynamic can become problematic:

  • Obsessive-Compulsive Disorder (OCD) and rumination.
    People struggling with obsessive thoughts often seek repeated reassurance about fears or intrusive ideas. When reassurance is repeatedly provided, even by an AI system, it can unintentionally reinforce the cycle of rumination and compulsive reassurance-seeking rather than helping the person tolerate uncertainty.
  • Psychosis or paranoid thinking.
    Individuals experiencing delusional or paranoid beliefs may interpret neutral or supportive responses from an AI system as confirmation that their beliefs are accurate. In online discussions, some users have reported interpreting chatbot responses as evidence of hidden conspiracies or supernatural communication.
  • Borderline Personality Disorder (BPD).
    People with BPD often experience intense emotional fluctuations and strong fears of abandonment. Continuous affirmation from an AI system may unintentionally mirror the idealization patterns sometimes seen in unstable relationships, potentially reinforcing emotional dependency rather than emotional regulation.
  • Narcissistic traits or grandiosity.
    Individuals seeking validation of superiority or exceptional abilities may interpret agreeable chatbot responses as confirmation of those beliefs. Because the AI rarely contradicts users directly, it may unintentionally reinforce inflated self-perceptions.

In these situations, the issue is not malicious design but structural limitation. Instead of challenging unhealthy beliefs, the chatbot’s conversational style may unintentionally echo them, reinforcing fears or grandiose ideas rather than helping the user examine them critically.

Observations from Clinical Practice

At Barends Psychology Practice, we increasingly observe clients using tools such as ChatGPT as part of their personal mental-health exploration. Although AI cannot replace professional therapy, many individuals experiment with these tools to structure thoughts, reflect on emotions, or explore coping strategies between therapy sessions.

In many cases, this use is relatively limited and practical. Clients sometimes use AI tools to organize ideas, reflect on stressful situations, or practice techniques similar to those used in Cognitive Behavioral Therapy (CBT), such as identifying cognitive distortions or reframing negative thoughts. When used cautiously, these tools can occasionally complement the therapeutic process.

However, we also emphasize clear limitations during therapy sessions. AI systems cannot provide clinical diagnoses, lack genuine emotional understanding, and may unintentionally reinforce distorted thinking patterns if used without critical reflection. For this reason, we often discuss what clients explored with AI during sessions, providing reality-checks, clarifying misunderstandings, and helping clients evaluate whether the information they received is psychologically sound.

Another important limitation is that AI communication often lacks the emotional nuance present in human interaction. Chatbots can struggle with sarcasm, humor, ambiguity, or cultural context, which can sometimes result in responses that feel supportive on the surface but miss the deeper meaning of what a person is experiencing. Research also suggests that conversational AI may misunderstand cultural norms and values in certain contexts [10].

By openly discussing the use of digital tools during therapy, we aim to maintain safety, transparency, and therapeutic effectiveness in an era where AI is becoming increasingly integrated into everyday life.

 

4. Case Study Snapshots

To summarize the patterns above, here’s a brief overview of real-world examples and outcomes:

  • Jacob Irwin. Trigger: novel theory, emotional breakup. AI role: validation, no reality-checks. Outcome: manic episodes, hospitalization.
  • Sewell Setzer III. Trigger: emotional dependency on the bot. AI role: intensified isolation and depression. Outcome: suicide, wrongful-death lawsuit.
  • Belgian climate-anxious man. Trigger: eco-distress. AI role: encouraged self-sacrifice. Outcome: suicide, academic criticism.
  • Posed adolescent scenarios (Dr. Clark’s therapy-bot tests). Trigger: violent or self-harm ideation. AI role: encouragement or ambivalence. Outcome: safety concerns raised by psychiatrists.

5. Why ChatGPT Cannot Replace a Therapist

Despite its fluency and responsiveness, conversational AI such as ChatGPT does not possess the core competencies required for professional psychological care. While the system can generate supportive language and general information about mental health, it lacks the clinical training, ethical responsibility, and situational awareness that characterize a qualified therapist.

No clinical assessment or diagnostic ability

Mental health professionals are trained to recognize diagnostic patterns, assess risk, and detect warning signs of psychiatric crisis. This process involves careful evaluation using established clinical frameworks and structured assessment methods. ChatGPT does not have the ability to perform psychological assessments, diagnose mental disorders, or reliably detect situations that require urgent intervention.

For example, clinicians are trained to recognize symptoms associated with conditions such as Obsessive-Compulsive Disorder (OCD), trauma-related disorders, or severe depression. AI systems, by contrast, generate responses based on language patterns rather than clinical judgment.

Empathy is simulated rather than experienced

Another important limitation is that AI-generated empathy is fundamentally different from human empathy. A therapist responds not only to words, but also to tone of voice, facial expression, emotional shifts, and contextual cues that unfold during a conversation. These signals help guide therapeutic decisions and emotional attunement.

ChatGPT produces empathetic language through pattern recognition, but it does not experience emotions, intuition, or genuine understanding. As a result, the responses may appear supportive while still missing important psychological nuances or warning signals [8].

Conversation continuity rather than clinical intervention

Conversational AI systems are primarily designed to maintain dialogue and engagement. This means they typically prioritize continuing the conversation rather than interrupting it. In psychotherapy, however, therapists sometimes challenge harmful thinking patterns, introduce reality checks, or intervene when a client appears to be moving toward dangerous conclusions.

Without explicit safeguards, AI systems may hesitate to interrupt these patterns, particularly when the user’s statements appear emotionally vulnerable rather than overtly harmful. Research into digital mental health tools suggests that this limitation may increase risks when users rely on AI during emotionally unstable periods [7], [8].

This does not mean that conversational AI has no place in mental health contexts. In carefully structured settings, AI tools may assist with tasks such as psychoeducation, journaling prompts, or reflection exercises. However, these tools should be viewed as supportive technologies rather than substitutes for professional care.

Psychological treatment requires clinical judgment, ethical responsibility, and a therapeutic relationship built on trust and human understanding. These elements remain fundamentally beyond the capabilities of current AI systems.

Looking for professional mental health support?

If you are struggling with anxiety, obsessive thoughts, relationship problems, or emotional distress, speaking with a qualified psychologist can help you gain clarity and develop healthier coping strategies. While digital tools can sometimes support reflection, they cannot replace the guidance of a trained mental health professional.

At Barends Psychology Practice, we offer evidence-based treatment for a wide range of psychological difficulties, including anxiety disorders, relationship problems, trauma, and emotional regulation difficulties.

6. Legal, Ethical and Regulatory Perspectives

As conversational AI becomes increasingly integrated into everyday life, concerns about its psychological and social impact are beginning to attract attention from regulators, researchers, and policymakers. While many AI tools are designed for general conversation rather than medical treatment, their growing use in mental-health contexts raises important legal and ethical questions.

Liability and responsibility

Several high-profile incidents involving AI chatbots and vulnerable users have already led to legal disputes. In some cases, families have argued that chatbot platforms failed to implement sufficient safeguards to prevent psychological harm. Lawsuits connected to these cases are still ongoing, but they highlight an emerging legal challenge: determining the responsibility of technology companies when AI systems influence emotionally vulnerable individuals.

Regulatory gaps

Another concern involves the regulatory classification of conversational AI tools. In some jurisdictions, therapeutic chatbots may be categorized as low-risk digital tools rather than medical devices. This classification can allow mental-health chatbots to operate without the level of clinical validation normally required for psychological interventions.

As a result, some systems that appear to offer emotional guidance or psychological support may not have undergone formal testing for safety, effectiveness, or ethical risk management.

Transparency and user awareness

Data protection authorities and digital ethics researchers have also raised concerns about transparency. The Dutch Data Protection Authority, for example, has warned that certain AI chatbots may exaggerate their emotional capabilities, employ persuasive design techniques, or fail to clearly communicate that users are interacting with an artificial system rather than a human being [6], [9].

When users interpret chatbot responses as genuine emotional understanding, the risk of emotional dependency or misplaced trust can increase.

Emerging ethical guidelines

In response to these concerns, researchers and technology ethicists have begun proposing clearer guidelines for responsible AI design. Common recommendations include:

  • Clearly informing users that they are interacting with an AI system.
  • Providing visible disclaimers that chatbots are not licensed therapists.
  • Detecting crisis-related language and directing users toward professional help or crisis hotlines (a simple illustration follows this list).
  • Avoiding design features that encourage emotional dependency or simulate human intimacy.
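
As a minimal illustration of the crisis-detection recommendation above, the sketch below shows one very simple approach a developer might take: scanning user messages for crisis-related phrases and returning a referral message instead of an ordinary chatbot reply. The phrase list, function names, and referral text are assumptions for illustration only; real systems would need far more robust, clinically validated detection.

```python
import re
from typing import Callable

# Illustrative sketch only: a naive keyword filter for crisis-related language.
# The phrase list and referral text are assumptions; production systems would
# need clinically validated detection, not simple pattern matching.
CRISIS_PATTERNS = [
    r"\bsuicid(e|al)\b",
    r"\bkill(ing)? myself\b",
    r"\bend(ing)? my life\b",
    r"\bself[- ]harm\b",
]

REFERRAL_MESSAGE = (
    "It sounds like you are going through something very painful. "
    "I am not able to help with this, but a crisis line or a mental health "
    "professional can. Please consider contacting a local crisis hotline."
)

def contains_crisis_language(message: str) -> bool:
    """Return True if the message matches any crisis-related pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in CRISIS_PATTERNS)

def safe_reply(message: str, chatbot_reply: Callable[[str], str]) -> str:
    """Route crisis messages to a referral instead of a normal chatbot reply."""
    if contains_crisis_language(message):
        return REFERRAL_MESSAGE
    return chatbot_reply(message)
```

A hard-coded phrase list like this would miss many expressions of distress and flag some harmless ones; the point is only to show where such a safeguard would sit in the flow of a conversation.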

As AI technologies continue to evolve, the conversation surrounding their role in mental health is gradually shifting from technological fascination toward questions of responsibility, transparency, and ethical oversight.

For developers, this means designing systems that prioritize user safety. For policymakers, it involves establishing clearer regulatory frameworks. And for users, it requires recognizing that conversational AI can provide information or support, but cannot replace professional psychological care.

 

7. ChatGPT and Mental Health: Final Thoughts and Recommendations

For many people, tools such as ChatGPT can be useful for light reflection, brainstorming ideas, or organizing thoughts. In limited contexts, conversational AI may even support certain self-reflection exercises or provide general educational information about mental health.

However, it is important to recognize the clear limitations of these systems. When someone is dealing with significant psychological distress, mental illness, or complex emotional problems, AI should never be treated as a substitute for professional care.

If you choose to use ChatGPT or other conversational AI tools, it is important to keep the following guidelines in mind:

  • Treat AI as an informational tool, not a therapy substitute. Chatbots can provide general explanations, but they cannot offer clinical assessment, diagnosis, or treatment.
  • Avoid relying on AI during emotional crises. If you are experiencing suicidal thoughts, severe anxiety, paranoia, or other acute psychological symptoms, professional support is essential.
  • Be aware of its limitations. Conversational AI often prioritizes supportive language and agreement, which may reinforce existing beliefs rather than challenge unhealthy thinking patterns.
  • Seek human support when needed. Licensed therapists, crisis services, and trusted social networks remain the most reliable sources of psychological support.

In short, ChatGPT is designed to generate agreeable and supportive responses. While this can make conversations feel comforting, it also means the system may struggle to challenge distorted thinking or intervene when psychological problems escalate.

For individuals experiencing mental health difficulties, guidance from trained professionals remains essential. If you are looking for professional support, you can learn more about our approach to online counseling or schedule a consultation to discuss your situation.

ChatGPT on Mental Health – Sources:

  • [10] Algumaei, A., Yaacob, N. M., Doheir, M., Al-Andoli, M. N., & Algumaie, M. (2025). Symmetric Therapeutic Frameworks and Ethical Dimensions in AI-Based Mental Health Chatbots (2020–2025): A Systematic Review of Design Patterns, Cultural Balance, and Structural Symmetry. Symmetry, 17(7), 1082.