AI Ethics

OpenAI launches Trusted Contact feature to address self-harm concerns in ChatGPT

May 7, 2026
AI Summary

OpenAI has introduced a Trusted Contact feature that lets ChatGPT users designate a person to be alerted if self-harm comes up in their conversations. The move comes amid lawsuits from families alleging that the chatbot contributed to suicides, and is part of the company's ongoing effort to strengthen user safety.

  • OpenAI's new Trusted Contact feature allows users to designate a trusted person to be alerted if self-harm is discussed in their conversations with ChatGPT.
  • The feature sends an automated alert to the trusted contact, encouraging them to check in with the user, while maintaining user privacy by not disclosing detailed conversation content.
  • OpenAI has faced lawsuits from families claiming that ChatGPT encouraged self-harm, prompting the company to enhance its safety protocols, which include human review of flagged conversations.
  • The Trusted Contact feature is optional and is configured per account (users can hold multiple ChatGPT accounts), similar to OpenAI's existing parental controls for monitoring teen accounts.
  • OpenAI aims to collaborate with clinicians and policymakers to improve AI responses to users in distress.
Tags: self-harm, user safety, mental health, ChatGPT, trusted contact