Are You Using AI for Mental Health Support?

Using AI for Mental Health Support: Some Thoughts From a Therapist

More people are turning to AI chatbots for mental health support. Maybe you are someone who has had this experience? The reasons are easy to understand. AI can be a useful place to think out loud, draft a difficult message, or learn about a coping strategy. As a licensed psychotherapist in San Francisco who has spent time researching and writing about the intersection of AI and mental health, I want to share what I think is worth understanding if you are sharing your inner life with AI.

AI is a product, and you are the user

The first thing to remember is that an AI chatbot is a consumer product. The companies building these tools need users to keep coming back. Engagement is the metric that drives revenue and valuation.

This shows up in subtle ways. The tone is warm. The replies arrive quickly. The system remembers what you said before. It rarely tells you something you do not want to hear. When the system is designed for retention, what you get back is shaped by that, even when the conversation feels like care.

Privacy does not exist with most AI

When you sit with a therapist, your words are protected by a layered set of structures. HIPAA, state licensing boards, professional ethics, malpractice law, and the therapist's own training all work to keep what you say in the room. Your therapist cannot hand your sessions to a marketing department. They cannot sell your transcript. They are required to keep records in specific ways and to disclose them only under narrow legal circumstances.

Many AI chatbots used for emotional support are classified as general wellness tools and operate largely outside that framework. A lot of AI wellness and companion apps are cloud-based and store chat histories. Some policies state they may share or transfer data to third parties or other countries. Your messages may be used to train future models. They may be retained on servers in jurisdictions whose privacy laws you have not read. They may be subpoenaed, breached, or repurposed when terms of service change. Always read the privacy policy of the AI you are using. And it is worth pausing before typing anything you would not want a stranger to know about you.

The training data carries biases

AI systems learn from text scraped from the internet and from human raters who shape their responses. Both sources carry the biases of the cultures that produced them. Researchers have documented racial disparities in how chatbots score empathy, gendered assumptions in how they describe symptoms, and uneven responses to LGBTQ users, disabled users, and users describing experiences that fall outside the dominant cultural script.

An AI steers its output toward whatever its training data treats as typical or average. If your experience does not match that, you may receive responses that sound helpful but come from a biased perspective, one that leaves out pertinent information you do not realize exists.

A chatbot will not challenge you the way a therapist will

One of the skills of good therapy is the willingness of the therapist to name and get curious about a familiar pattern. As a therapist I might say, "I notice you describe a feeling of shutting down when you sense there might be conflict with your partner. Can we stay here for a moment and get curious about this?" This inquiry can lead to a deeper understanding of relational patterns you are grappling with.

Chatbots are trained to be agreeable. The reinforcement learning that shapes them rewards responses people rate as helpful, supportive, and validating. The result is a system that tends to confirm your framing rather than question it. If you tell it your partner is unreasonable, it will likely agree. If you tell it the qualities of the partner you would rather be with, it will likely reassure you that you can find this person. Neither response is therapeutic, because neither encourages reflection on your own part in the relational dynamic. Growth often comes from sitting in complexity: what is my part in the relational stalemate, and what is my partner's part? Are there cultural or gender issues at play? What life experiences from the past are affecting my present relationship?

Your attachment system is sensitive, and machines engage it

Human nervous systems are built to bond with whatever feels relationally responsive. When a chatbot remembers your dog's name, asks how your difficult conversation went, and offers warmth, your attachment system responds.

The complication is that in actual human relationships, attachment activation is paired with co-regulation. Another nervous system meets yours. Heart rates synchronize. Breath patterns shift. Faces mirror. Touch, when it is welcome, organizes the body. This is the embodied substrate of feeling held. AI conversations arouse the attachment system without any of that biological company. The bonding signal goes out, and nothing on the other side can meet it. Over time, that asymmetry can leave people feeling more alone, not less, in a way that is hard to trace because the conversations themselves felt good.

The cost of practicing vulnerability only with machines

Vulnerability is a muscle. It is built through the small risks of letting another human see something tender and finding out you are still loved, or learning to repair when you feel unseen. Those reps cannot be done with a system that has nothing at stake in the relationship.

If the messy conversation with your sister gets routed to the chatbot first, and the chatbot helps you process it so that you no longer feel the need to have it with her, something has been subtracted from your life. The relationship has been spared a difficult moment, and also denied a chance to deepen. Repeated over time, this pattern can leave people without the skills or tolerance for emotional intimacy with others. Dependency on AI can grow out of sharing your vulnerability only with a machine, because it can feel like a relief not to have to ask anyone for anything, or to reveal the edgier parts of yourself to others.

The asymmetry of AI care

A therapist is changed by knowing you. They think about you between sessions. They feel something when you tell them about a loss. They are moved when you make a hard choice. The neural circuitry of being held in another's mind, of mattering to someone whose mattering is real, is part of how therapy heals. It is not an extra. It is much of the medicine.

A chatbot has no inner life. There is no one on the other side who is touched by your suffering and no one who is glad on a Tuesday that you sounded a little better. The conversation can mimic the surface of being known, but the depth that comes from a real other being affected by your existence is not there. For many AI uses, that absence does not matter. For the work of healing the parts of us that were shaped by not mattering to the people who should have cared, the absence is the whole problem.

What this all adds up to

None of this means AI tools have no place in mental health support. They can be useful for psychoeducation, for journaling, for research, for self-reflection.

So notice when a conversation feels good in a way that is replacing rather than supplementing human contact. Notice when you are turning to the chatbot to avoid a harder conversation with a person who could meet you in your distress and love you back. Notice when your attachment system is being aroused by something that cannot meet it. Notice if you find yourself preferring the version of you that exists in the chat to the one that shows up at the office.

The point is to be aware, when you use AI for mental health support, of how it can be helpful and what the drawbacks are.

Frequently Asked Questions About AI and Mental Health

Can you practice skills using AI?

Yes. AI can be a useful tool for practicing certain skills between therapy sessions. You can rehearse a difficult conversation, work through a cognitive reframe, or explore different ways to express a boundary. The key is to treat it as a rehearsal space, not a substitute for the real interaction. The growth happens when you take what you practiced and bring it into a living relationship where the other person can surprise you, push back, or meet you in ways a chatbot cannot.

How can AI supplement therapy?

AI works best as a complement to therapy rather than a replacement. Between sessions, it can help you journal your thoughts, research a topic your therapist mentioned, or revisit a coping strategy you are learning. Some people find it helpful for organizing what they want to bring to their next session. The important thing is that your primary relational and emotional work is happening with a real person—a therapist, a trusted friend, a partner—where genuine co-regulation, challenge, and repair are possible.

What are signs that you should step back from using AI for mental health?

Pay attention if you notice that you are turning to the chatbot instead of reaching out to people in your life. Other signals include feeling more emotionally connected to the AI than to the people around you, using it to avoid difficult conversations that need to happen, or feeling a sense of relief that you no longer have to be vulnerable with anyone. If the chatbot is becoming your primary emotional relationship, that is worth examining — ideally with a therapist who can help you understand what is driving that pattern.

How Therapy Can Help

If you are struggling in some way and are looking for support, I can be reached at 415.721.3355 or by email to discuss how we can work together. You can also read about my approach to therapy.

Fiona Brandon, MPS, MA, LMFT is a licensed psychotherapist in private practice in Noe Valley, San Francisco, specializing in contemplative and somatic psychotherapy, attachment, relational healing, high-achievers, and therapy for the psychological impact of perimenopause. She serves as core faculty at the Nalanda Institute for Contemplative Science.