
When queer people are in crisis, AI isn't a safe space

Opinion: As anti-LGBTQ hostility rises, chatbots may feel comforting, but neutrality and algorithms can't replace intersectional, affirming mental health care.


While AI can offer support, it falls short in understanding intersectional identities, argues Tara Lombardo


The collective mental health of transgender and queer people in the U.S. is under assault. While discrimination and stigmatization are common attacks on our communities, rising anti-LGBTQ+ aggression over the past few years threatens to erode the hard-won progress that has allowed so many to live openly and authentically.

Facing political erasure, social backlash, and the disappearance of safe spaces, many LGBTQ+ folks are understandably turning to technology for solace and support. I understand the feeling. I've tried it too.


When it feels like there's no one out there who gets where we're coming from, generative AI platforms like ChatGPT, Gemini, Claude, and others can feel like a light in a dark tunnel. They're a readily available, nonjudgmental presence that listens when few others will. Yet despite its accessibility, AI falls short when someone is in crisis or needs a safe space to explore who they are. A licensed human therapist with specialized training in LGBTQ+-affirmative care remains the gold standard for mental health support for queer and transgender folks.


When AI Lets Queer and Trans Folks Down

Unlike humans, AI chatbots have no lived experiences or intersecting identities.

A few months ago, I tested a popular AI chatbot by asking it how to come out to conservative parents. It offered a polished, step-by-step guide: "assess your situation, reflect on your feelings, choose your timing, and prepare for different reactions." Objectively, it was fine advice. But what if coming out might cost me my job? What if my cultural background reveres silence around sexuality? What if I'm middle-aged, deeply religious, and wrestling with doctrines that label my existence a sin? Advice like that could place me in emotionally precarious, even dangerous, situations.

Over 30 years ago, legal scholar Kimberlé Crenshaw coined the term "intersectionality" to describe how systems of oppression interact to edge out and even erase Black women from society, particularly in employment. She argued the "intersections" of individuals' various identities (e.g., as Black and as a woman) create a lens through which we can see "where power comes and collides, where it interlocks and intersects" and, perhaps most importantly, where it harms or helps. From my personal and professional experience, these collisions are not just concepts or theories. For people who have been discriminated against and stigmatized, they often cause real harm.

Generative AI, by contrast, assumes a "neutral" user profile, treating all users as having the same lived experiences. It cannot fully understand how identity, culture, religion, and class shape our lives, problems, and interactions with society. This matters when you're trying to understand a whole human being and how they move through the world, a significant goal in the therapeutic relationship.

AI chatbots don't know what it's like to be a queer Black teen worried they may be assaulted because of racist and homophobic aggression by peers. They can't sense or empathize with the fear that leads a Latina trans woman to keep a spare change of clothes in her bag just in case she gets "clocked" by someone on the street and needs to use her old ID to avoid being detained. AI is oblivious to the pressure and repression felt by the only gay man in a religious family that still prays he'll find a wife. I recognize these anxieties immediately, and so do my clients. We recognize them because we have lived them.

In my work, intersectionality is practical, not abstract. Generative AI treats identity and human experience as data. Therapists, when we do our jobs well, treat them as psychosocial contexts.

Only about 12% of psychotherapists in the U.S. identify as LGBTQ+. That statistic matters because representation shapes safety. AI's tone is usually polite and accommodating, which may seem supportive and deferential, right? For too many of us queer and trans people, however, accommodation itself can be a form of defense, a pattern we have learned after years of managing other people's comfort (or discomfort) with our essential identities. Therapy should challenge that pattern rather than reinforce it.

After 15 years practicing psychotherapy within the LGBTQ+ community, and as a lesbian myself, I can say intersectional complexities are not merely ways to explain social difference; they are central to understanding the human condition. No algorithm can effectively weigh the impact those complexities have on a person's mental health and well-being.

Generative AI can give solid data-driven advice. If we are doing our jobs well, however, we therapists don't give advice. We explore, we reflect, we notice patterns, and we help our clients to hold multiple truths at once. The goal is not to tell clients what to do but to help them uncover who they are. By design, AI chatbots offer answers and solutions. For LGBTQ+ people, whose identities, thoughts, and feelings have long been questioned and invalidated, "answers" can be dangerous.

For example, the recent revival of anti-LGBTQ+ pseudo-psychology, such as conversion "therapy," signals that sexuality and gender identity are once again being pathologized and viewed as defects that we need to "fix."


When AI Helps … and When It's Not Enough

AI chatbots can serve as a mirror for projection and fantasy.

Within the therapeutic relationship, the therapist is often considered to be a "blank slate," allowing clients to project feelings and assumptions onto the therapist, a process known as transference. Interestingly, generative AI operates much the same way. Users often address it by a name, anthropomorphize it, or imagine it as a confidant. A skilled therapist uses transference as a tool for insight, but AI chatbots lack clinical supervision or self-awareness; their neutrality can soothe, but they cannot challenge or confront such projections in a supportive, healing way.

We learn to talk about our intersecting identities, trauma, and fear by practicing doing it. Finding words to articulate our pain is not an innate skill. For some, AI chatbots can serve as a low-stakes space to organize thoughts before reaching out for real help from a human being, which can be a helpful first step.

Neuroscientific research has shown that talk therapy physically changes neural pathways, a reminder that verbal expression itself can be transformative. In fact, being witnessed by another person increases empathy, learning, and interpersonal connection for both people; human connection heals. Conversations with generative AI can't rewire the brain, but they can help people find words for what hurts.

One of the greatest strengths of generative AI is its ability to synthesize information. It can quickly locate hotlines, online resources, social groups, LGBTQ+ community centers, and affirming therapists, often more efficiently than a standard web search. Used wisely, AI can be a practical resource and a bridge to real-world care.


The Bottom Line

Generative AI may comfort, inform, and even enlighten us. But it can't replace the nuanced, embodied understanding that comes from sitting across from another human being, particularly one who is trained to recognize the layers of identity, oppression, and resilience that shape our lives as queer and trans people.

Effective therapy results from a relationship with another person. AI is a powerful tool that can yield information, resources, and even the illusion of having someone else in the room with us. AI can offer guidance. Healing and resilience, in my experience, come from human relationships. In a time when we as LGBTQ+ people are fighting simply to exist, we don't just need light; we need warmth and someone who will sit with us in the difficult moments when there is no clear "answer." Only a human can do that.

Tara Lombardo, LMHC, is the executive director of the Institute for Human Identity, the first and longest-running provider of LGBTQ-affirming psychotherapy in the U.S., where she also serves as a senior therapist and clinical supervisor.

Perspectives is dedicated to featuring a wide range of inspiring personal stories and impactful opinions from the LGBTQ+ community and its allies. Visit Pride.com/submit to learn more about submission guidelines. We welcome your thoughts and feedback on any of our stories. Email us at voices@equalpride.com. Views expressed in Perspectives stories are those of the guest writers, columnists, and editors, and do not directly represent the views of Pride or our parent company, equalpride.
