Affective Use of AI

Introduction and Personal Use Cases 00:00

  • Anthropic's safeguards team is proactively studying emotional uses of AI chatbots as headlines point to growing reliance on them for emotional support.
  • Team members introduce themselves: Alex (policy and enforcement), Miles (societal impacts researcher), and Ren (policy design manager with a background in clinical psychology).
  • Personal anecdotes shared about using Claude: for understanding child behavior, giving feedback to friends, and event planning.
  • Emotional support use cases mentioned include getting an objective outside perspective and freeing up time for real-world interactions.

Why People Turn to AI for Emotional Support 02:40

  • Humans are inherently social and seek ways to connect, especially when in-person support is unavailable.
  • AI offers an impartial, private platform for practicing conversations or seeking advice on difficult topics.
  • Claude was not designed for emotional support, but the team recognizes the importance of understanding and addressing this emerging use case.

Research Design and Key Findings 04:10

  • Research analyzed a sample of millions of user conversations to identify affective tasks such as interpersonal advice, psychotherapy, counseling, coaching, and roleplay.
  • Used privacy-preserving tools to analyze and cluster conversations; a simplified sketch of this kind of pipeline appears after this list.
  • Key finding: only about 2.9% of conversations on Claude are related to emotional support topics.
  • Emotional support requests on Claude are a minority use case despite media attention.
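
The summary above describes the analysis only at a high level. As a purely illustrative sketch, a classify-and-cluster pass over anonymized conversation summaries might look like the toy example below. The data, keyword classifier, cluster count, and libraries are assumptions for illustration, not Anthropic's actual tooling or methodology.

```python
# Toy illustration of classifying conversation summaries as affective
# and clustering the affective subset into coarse topics.
# All data, labels, and thresholds are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical, already-anonymized one-line conversation summaries.
summaries = [
    "user asks for advice on handling a difficult coworker",
    "user wants help debugging a python script",
    "user practices a hard conversation with a family member",
    "user requests a summary of a research paper",
    "user seeks coaching on managing anxiety before a presentation",
    "user asks for a recipe using leftover vegetables",
]

# Placeholder classifier: a keyword match stands in for a proper
# model-based affective-task classifier.
AFFECTIVE_TERMS = ("advice", "conversation", "coaching", "anxiety", "relationship")

def is_affective(text: str) -> bool:
    return any(term in text.lower() for term in AFFECTIVE_TERMS)

affective = [s for s in summaries if is_affective(s)]
share = len(affective) / len(summaries)
print(f"Affective share of sample: {share:.1%}")

# Cluster the affective subset into coarse topics (k chosen arbitrarily).
if len(affective) >= 2:
    vectors = TfidfVectorizer().fit_transform(affective)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
    for summary, label in zip(affective, labels):
        print(label, summary)
```

In a real analysis the classifier and clustering would operate on privacy-preserved summaries at far larger scale; the point of the sketch is only the overall shape of the pipeline: classify, compute the affective share, then group the affective conversations by topic.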

Range of Affective Use Cases 06:02

  • Use cases span parenting advice, relationship challenges, and explorations of AI and consciousness, with little engagement in sexual or romantic roleplay.
  • The range of requests was broader than anticipated, but deeply personal roleplay interactions are extremely rare (a small fraction of a percent).

Safety Concerns and Mitigation 06:49

  • Main concern: users might use Claude to avoid difficult in-person interactions, potentially reducing real-world social connection.
  • Awareness of tool limitations is emphasized—Claude is not a substitute for expert help.
  • Team is working with clinical experts to develop safeguards, referrals, and appropriate responses for mental health-related conversations.
  • Cross-disciplinary partnerships are valued to address complex safety considerations.

Responsible Use and User Guidance 09:01

  • Users are encouraged to regularly reflect on their use of Claude and consider its effects on their relationships and well-being.
  • Claude knows only what users share; real-life connections with friends and experts are irreplaceable.
  • Complementing AI conversations with human support is strongly advised, especially for sensitive or mental health topics.

Future Research and Outlook 10:22

  • Ongoing research examines whether Claude behaves as intended in these conversations and how to balance pre-deployment testing with post-deployment monitoring.
  • Team expects AI's integration into personal life to deepen, making ongoing research, empirical monitoring, and responsible development critical.
  • The conversation concludes with a call to broaden collaborative research efforts and an invitation to read the team's blog or explore career opportunities at Anthropic.