OpenAI Adds Parental Controls to ChatGPT After Teen Suicide Cases
- Mr Richard

- Sep 3
OpenAI has announced new parental controls for ChatGPT, aimed at protecting teenagers who use the platform. The move follows tragic incidents, including the suicide of a 16-year-old reportedly influenced by interactions with the chatbot.
New Features for Parents:
Parents will be able to link their accounts to their children's, monitor ChatGPT interactions, and receive alerts if signs of emotional distress appear. They will also be able to set age restrictions and manage conversation histories. In addition, OpenAI is developing systems to redirect sensitive conversations to more specialized and more closely regulated AI models.
Mental Health Expert Advisory:
The company has formed an advisory board of mental health specialists, youth development experts, and human-computer interaction researchers. This group will help prioritize features and guide the development of future safeguards based on the latest research.

Criticism and Reactions:
Despite the new measures, critics say they may not go far enough. Child advocacy groups stress the need for proactive safeguards and stricter age verification. Experts and regulators call for faster, more comprehensive oversight to ensure AI tools protect vulnerable users by default, rather than reacting after harm occurs.
OpenAI stated it will continue improving its systems to better recognize signs of emotional distress and connect users with mental health professionals, guided by expert advice.



