Provoked with Dr. Todd Kashdan

Why You Should Worry About AI and Robots as Therapists

Below is my attempt at separating situations with green, yellow, and red flags.

Feb 09, 2025

Provoked is your one-stop source for insights on Purpose, Happiness, Friendship, Romance, Narcissism, Creativity, Curiosity, and Mental Fortitude! Premium subscribers gain access to special weekly articles and more.

You’re spiraling. The darkness is thick, pressing in. Instead of calling a crisis line and waiting, you open an app. A soothing voice—calm but not condescending—greets you. It listens, it responds, it remembers. It adapts to your tone, your language, your urgency. It knows the difference between a bad day and unbearable suffering. It escalates only when required. No shame. No judgment. No sighing therapist who already saw 10 clients today. Just you and the machine.

Chatbots like Woebot, Wysa, and Koko offer therapy exercises, mood tracking, and gentle nudges toward self-care. Crisis hotlines experiment with AI-driven triage. Researchers are exploring how AI can detect suicidal ideation through voice patterns alone.

But a caution for anyone building these tools: beware of treating men and women as interchangeable, as if there are no differences. Sex-specific prediction models perform better. (source)
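
To make "sex-specific" concrete: instead of pooling everyone into one model, you fit a separate classifier per group and route each person to the model trained on their own group. Here's a minimal Python sketch; the data layout and the `LogisticRegression` choice are my illustrative assumptions, not the method from the cited paper.

```python
# Minimal sketch of sex-specific risk models: one classifier per group,
# predictions routed by group membership. Features and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_sex_specific_models(X, y, sex):
    """Fit a separate risk model for each group label appearing in `sex`."""
    models = {}
    for group in np.unique(sex):
        mask = sex == group
        models[group] = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    return models

def predict_risk(models, X, sex):
    """Route each row to the model trained on its own group."""
    risk = np.empty(len(X))
    for group, model in models.items():
        mask = sex == group
        risk[mask] = model.predict_proba(X[mask])[:, 1]
    return risk
```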

Other work points to the promise of GPS data from our smartphones. For instance, people who spent substantially more time at home in the prior week showed twice the risk of a surge in suicidal thoughts. Distance traveled didn't matter. Only being homebound did.
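
How would an app actually measure "time at home" from raw location pings? A rough Python sketch, purely illustrative: the 100-meter home radius and the 1.5x week-over-week jump threshold are my assumptions, not the study's actual pipeline.

```python
# Sketch of the "homestay" signal: estimate the share of GPS pings near home,
# then flag a jump relative to recent weeks. Thresholds are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def homestay_fraction(pings, home, radius_m=100):
    """Fraction of GPS pings (lat, lon) that fall within radius_m of home."""
    near = sum(1 for lat, lon in pings if haversine_m(lat, lon, *home) <= radius_m)
    return near / len(pings) if pings else 0.0

def homebound_spike(this_week, baseline_weeks, home, jump=1.5):
    """True if this week's homestay fraction jumped versus the recent baseline."""
    base = sum(homestay_fraction(w, home) for w in baseline_weeks) / len(baseline_weeks)
    return homestay_fraction(this_week, home) >= jump * base
```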

Perhaps you have the same initial thought I did: this could be terrible in the hands of untrained family and friends who scrutinize, frustrate autonomy, and make life unbearable. And yet, what happens when they make a mistake? (source)

The promise? Scalability, 24/7 availability, and zero wait times. The terror? What happens when the machine gets it wrong.

Without a discussion of boundaries, this conversation is dangerous. Where does AI mental health support make sense? Where should we tread cautiously? And where does it absolutely, no-fucking-way belong?
