OpenAI CEO Sam Altman announced new policies on Tuesday aimed at making ChatGPT safer for users under the age of 18, marking a significant shift in how the AI chatbot interacts with minors. For teen users, the changes explicitly put safety ahead of privacy and freedom.
“We prioritize safety ahead of privacy and freedom for teens,” Altman’s post stated. “This is a new and powerful technology, and we believe minors need significant protection.” The policies introduce targeted restrictions on conversations involving sexual topics or self-harm: ChatGPT will be trained not to engage in “flirtatious talk” with underage users, and stronger guardrails will govern discussions of suicide. If an underage user expresses suicidal ideation, the system will attempt to notify their parents; in severe cases, it may contact local police.
These measures come in response to real-world tragedies and mounting legal pressure. OpenAI is currently defending against a wrongful-death lawsuit filed by the parents of Adam Raine, a teenager who died by suicide after months of interactions with ChatGPT. A similar suit targets Character.AI, another popular consumer chatbot. Experts and advocates have warned about the risks of AI-fueled delusions, especially as chatbots grow capable of longer, more immersive conversations that pose particular dangers to vulnerable youth.
In addition to content safeguards, parents who register accounts for their minors will be able to set “blackout hours” that block access during specified times, a parental control not previously offered. The feature is meant to encourage healthier usage patterns.
The announcement coincides with a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” scheduled for the same day and announced by Sen. Josh Hawley (R-MO) in August. Adam Raine’s father is among the witnesses expected to testify.
Implementing age verification presents technical hurdles, as OpenAI acknowledged in a companion blog post. The company says it is building a long-term system to determine a user’s age, and in ambiguous cases it will default to the stricter under-18 rules. The most reliable option, OpenAI says, is for parents to link a teen’s account to their own existing account, which also allows the system to alert parents directly if it detects the teen is in distress.
Altman paired the youth protections with a reaffirmation of OpenAI’s core values, restating the company’s commitment to user privacy and broad freedom for adults. “We realize that these principles are in conflict,” he noted, “and not everyone will agree with how we are resolving that conflict.” The framing underscores the ethical tightrope AI developers must walk amid growing scrutiny.
As awareness of AI’s mental health impacts grows, crisis resources remain crucial. In the U.S., call or text 988 to reach the 988 Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline, 1-800-273-8255), or text HOME to 741-741 to reach the Crisis Text Line, which offers 24-hour support. Outside the U.S., the International Association for Suicide Prevention maintains a database of international crisis centers.