OpenAI Introduces Trusted Contact Feature to Boost ChatGPT User Security
OpenAI has introduced a new safety feature for ChatGPT called Trusted Contact, designed to enhance user support during moments of potential distress. The feature allows adult users to designate a trusted individual—such as a friend or family member—to receive alerts if the AI detects signs of self-harm or emotional crisis in a conversation. This move marks a significant expansion of OpenAI’s efforts to balance user privacy with proactive safety measures, especially in light of ongoing legal and ethical challenges surrounding AI and mental health.
The Trusted Contact feature works by sending an automated notification to the designated contact when ChatGPT identifies language or behavior that may indicate self-harm. The alert encourages the contact to reach out to the user, providing an additional layer of support beyond the AI’s own responses. OpenAI emphasizes that these alerts are brief and do not include detailed conversation content, in order to protect user privacy.
This initiative follows OpenAI’s September 2025 rollout of parental controls, which allow parents to monitor and receive safety notifications about their teens’ interactions with ChatGPT. The company has faced increasing scrutiny and lawsuits from families of individuals who have expressed distress or harm-related intentions after using the platform. In response, OpenAI has been working with mental health experts to refine its detection and response protocols, including new evaluation methods that simulate extended conversations about emotional distress.
Trusted Contact is opt-in: users choose whether to designate a contact, and no account linking is required. It is part of OpenAI’s broader effort to improve how the AI supports individuals during difficult moments. The company continues to collaborate with clinicians, researchers, and policymakers to refine these safeguards, aiming to create a more responsive and empathetic AI environment.
While the feature is a step toward addressing the ethical responsibilities of AI platforms, it also raises questions about how to weigh user privacy against the need for intervention in a crisis. OpenAI’s approach reflects a growing industry trend toward integrating human oversight and support networks into digital interactions, particularly in sensitive areas such as mental health.
For now, the Trusted Contact feature is available to adult ChatGPT users, with OpenAI promising further updates as the system evolves. The company’s ongoing safety work, including partnerships with expert councils and global physician networks, underscores its dedication to making AI tools safer and more supportive for all users.
