Why You Should Worry About AI and Robots as Therapists
Below is my attempt to sort these situations into green, yellow, and red flags.
You’re spiraling. The darkness is thick, pressing in. Instead of calling a crisis line and waiting, you open an app. A soothing voice—calm but not condescending—greets you. It listens, it responds, it remembers. It adapts to your tone, your language, your urgency. It knows the difference between a bad day and unbearable suffering. It escalates only when required. No shame. No judgment. No sighing therapist who already saw 10 clients today. Just you and the machine.
Chatbots like Woebot, Wysa, and Koko offer therapy exercises, mood tracking, and gentle nudges toward self-care. Crisis hotlines are experimenting with AI-driven triage. Researchers are exploring how AI can detect suicidal ideation through voice patterns alone.

Other work points to the promise of the GPS data from our smartphones. For instance, people who spent substantially more time at home over the prior week faced twice the risk of a surge in suicidal thoughts! Distance traveled didn't predict anything. Only being homebound did.
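To make that concrete, here is a minimal, hypothetical sketch of the kind of signal such studies rely on: estimating how homebound someone was over the past week from smartphone GPS pings. The data format, the 100-meter "home" radius, and the function names are my assumptions for illustration, not any particular study's pipeline.

```python
# Hypothetical sketch: estimating "time at home" from a week of GPS pings.
# Thresholds and data layout are assumptions, not a published method.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt


@dataclass
class Ping:
    lat: float
    lon: float


def distance_m(a: Ping, b: Ping) -> float:
    """Haversine distance between two GPS pings, in meters."""
    r = 6_371_000  # Earth radius in meters
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(h))


def fraction_of_time_at_home(pings: list[Ping], home: Ping, radius_m: float = 100.0) -> float:
    """Share of the week's pings that fall within `radius_m` of home.

    Assumes pings arrive at roughly even intervals, so the share of pings
    approximates the share of time spent at home.
    """
    if not pings:
        return 0.0
    at_home = sum(1 for p in pings if distance_m(p, home) <= radius_m)
    return at_home / len(pings)


# Example: a week where 8 of 10 pings landed at home flags as highly homebound.
home = Ping(40.7128, -74.0060)
week = [Ping(40.7128, -74.0060)] * 8 + [Ping(40.7580, -73.9855)] * 2
print(fraction_of_time_at_home(week, home))  # 0.8
```

A real pipeline would weight pings by the gaps between them and compare each person against their own baseline week, but the core signal is this simple: a share of time, not a distance.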

The promise? Scalability, 24/7 availability, and zero wait times. The terror? What happens when the machine gets it wrong.
Without a discussion of boundaries, this conversation is dangerous. Where does AI mental health support make sense? Where should we tread cautiously? And where does it absolutely, no-fucking-way belong?