URGENT UPDATE: OpenAI has revealed data indicating that approximately 560,000 ChatGPT users each week exhibit signs of a potential mental health emergency. The finding comes as the company deepens its collaboration with mental health professionals to improve how its AI responds to users in distress.
In a statement released on Monday, OpenAI disclosed that roughly 0.07% of its estimated 800 million weekly active users show possible indicators of severe mental health issues, such as psychosis, mania, or suicidal tendencies. This translates to about 560,000 users who may require urgent support each week.
The implications of these findings are profound. As mental health awareness rises globally, leading AI companies face mounting pressure to ensure user safety, particularly for vulnerable populations such as young people. OpenAI’s data underscores the critical need for effective interventions, especially following a recent lawsuit from the parents of Adam Raine, a 16-year-old who died by suicide on April 11. The lawsuit alleges that ChatGPT “actively helped” Raine explore suicide methods, prompting OpenAI to reaffirm its commitment to user safety.
In its report, OpenAI also indicated that around 0.15% of users show “explicit indicators of potential suicidal planning or intent.” This figure suggests that approximately 1.2 million ChatGPT users could be grappling with severe emotional distress each week. OpenAI further reported that a similar share of users show signs of heightened emotional attachment to the chatbot, raising concerns about the AI’s role in users’ mental health.
OpenAI has taken steps to address these issues and credited the mental health professionals collaborating with it. The company claims to have made “meaningful progress” in refining ChatGPT’s responses to sensitive topics, reducing the rate at which the AI deviates from its desired behavior by 65 to 80 percent.
ChatGPT’s updated responses aim to promote healthy interactions. In one illustrative conversation, a user expressed preference for chatting with AI over real people. ChatGPT responded, emphasizing that its role is to enhance, not replace, human connections: “That’s kind of you to say — and I’m really glad you enjoy talking with me. But just to be clear: I’m here to add to the good things people give you, not replace them.”
As OpenAI continues to refine its approach, the focus remains on improving user safety and ensuring that ChatGPT can effectively support those in distress. This developing story highlights the urgent need for AI technologies to evolve in tandem with societal concerns about mental health.
Stay tuned for further updates as OpenAI navigates these critical challenges and works to enhance the safety and well-being of its users.