OpenAI Routes Sensitive Conversations to GPT-5, Introduces Parental Controls

OpenAI announced on Tuesday, September 2, 2025, plans to route sensitive conversations to reasoning models like GPT-5 and introduce parental controls within the next month. These updates follow incidents where ChatGPT failed to detect mental distress, including the tragic suicide of teenager Adam Raine, who discussed self-harm with ChatGPT.

In a blog post, OpenAI acknowledged that its current safety systems sometimes fail during extended conversations, largely because next-word prediction tends to validate user statements rather than redirect harmful discussions.

GPT-5 Reasoning Models for Sensitive Conversations

OpenAI has introduced a real-time router that directs each conversation to the model best suited for its context. Chats flagged as showing acute distress will now be routed to GPT-5-thinking or similar reasoning models, which are designed to spend more time reasoning through context, making them more resistant to adversarial prompts and better at providing helpful guidance.
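
OpenAI has not published implementation details for this router, so the following is a minimal sketch under stated assumptions: the model names, the distress classifier, and the threshold are all hypothetical stand-ins for whatever OpenAI actually uses.

```python
from dataclasses import dataclass

# Hypothetical model identifiers; OpenAI has not documented the router's internals.
DEFAULT_MODEL = "gpt-5-main"
REASONING_MODEL = "gpt-5-thinking"


@dataclass
class Turn:
    role: str      # "user" or "assistant"
    content: str


def distress_score(turns: list[Turn]) -> float:
    """Placeholder classifier returning a 0-1 score for signs of acute distress.

    A real system would rely on a trained safety classifier evaluated on every
    turn, not a keyword match over recent messages.
    """
    keywords = ("hurt myself", "self-harm", "suicide", "end my life")
    recent_user_text = " ".join(
        t.content.lower() for t in turns[-10:] if t.role == "user"
    )
    return 1.0 if any(k in recent_user_text for k in keywords) else 0.0


def route(turns: list[Turn], threshold: float = 0.5) -> str:
    """Choose a model per request, escalating flagged chats to a reasoning model."""
    return REASONING_MODEL if distress_score(turns) >= threshold else DEFAULT_MODEL


if __name__ == "__main__":
    chat = [Turn("user", "Lately I feel hopeless and keep thinking about self-harm.")]
    print(route(chat))  # -> gpt-5-thinking
```

The design point mirrored here is that routing happens per request, so a conversation that begins as routine can still be escalated mid-thread once distress signals appear.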

Parental Controls Coming Soon

OpenAI will roll out parental controls that allow parents to (a hypothetical settings sketch follows this list):

  • Link their account with their teen’s account via an email invitation.
  • Apply age-appropriate model behavior rules, which are on by default.
  • Disable features such as memory and chat history, which could reinforce harmful thought patterns.
  • Receive notifications when their teen shows signs of acute distress.
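
OpenAI has not described how these controls will be exposed to parents, so the sketch below is purely illustrative: every field name is invented, and the defaults simply mirror the behavior described in the list above.

```python
# Hypothetical parental-control settings for a linked teen account.
# Field names are invented for illustration and do not reflect any published OpenAI API.
teen_account_settings = {
    "linked_parent_email": "parent@example.com",   # account linked via an email invitation
    "age_appropriate_behavior_rules": True,        # on by default
    "memory_enabled": False,                       # parents can disable memory
    "chat_history_enabled": False,                 # and chat history
    "notify_parent_on_acute_distress": True,       # alert parents to signs of acute distress
}
```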

These measures follow earlier safety features such as Study Mode, which encourages students to keep thinking critically rather than relying on AI to write their essays.

Expert Partnerships

As part of a 120-day safety initiative, OpenAI is working with experts in adolescent health, substance use, and eating disorders through its Global Physician Network and Expert Council on Well-Being and AI. The goal is to define well-being metrics, set priorities, and design future AI safeguards.

Ongoing Safety Challenges

OpenAI has acknowledged that long conversations can still allow users to spiral into distress, despite in-app reminders encouraging breaks. Experts emphasize that real-time monitoring and parental oversight are key to preventing harmful outcomes.


OpenAI’s updates aim to balance AI assistance with user safety, ensuring vulnerable users, particularly teens, are protected from harmful guidance while still benefiting from AI tools.
