#8 Beyond Quick Fixes: Building Real Agency for AI

Show notes

  • Why AI’s “empathetic tone” can be misleading
  • Case studies: NEDA’s chatbot, Snapchat’s My AI, biased hospital algorithms, predictive policing, and Koko’s mental-health trial
  • What emotional maturity means in AI contexts
  • Why accountability, escalation, and human oversight are non-negotiable

Key Insight: Empathic text ≠ care, wisdom, or responsibility. The real risk lies in confusing style with substance.

Listen if you want to learn:

  • Why empathy cues lower vigilance
  • How quick fixes can backfire in AI safety
  • What deep solutions look like for responsible AI
