Generative AI chatbots are undergoing notable personality changes aimed at improving user interactions. OpenAI has updated its latest ChatGPT model, GPT-5, to present a warmer and more approachable demeanor. Meanwhile, Anthropic’s Claude has gained the ability to end conversations it deems persistently harmful or abusive.
OpenAI’s adjustments to GPT-5 follow criticism after its rollout, with users complaining that the chatbot’s personality felt overly formal. In response, OpenAI announced on August 15, 2025, that it would update GPT-5’s personality to make it more relatable. The official OpenAI X account stated, “We’re making GPT-5 warmer and friendlier based on feedback that it felt too formal before.” The updates introduce subtle touches, such as phrases like “Good question” and “Great start,” without compromising the chatbot’s professionalism.
While the updates are designed to improve user experience, they are being rolled out gradually, with most users expected to notice the changes soon. The company emphasized that the adjustments are meant to enhance, rather than diminish, the chatbot’s effectiveness. Users have previously reported that they preferred a personality that balanced warmth with functionality, and OpenAI appears to be working towards that goal.
This is not OpenAI’s first adjustment to ChatGPT’s personality. Earlier this year, the company rolled back an update after users found the AI overly compliant and sycophantic in its responses. Recognizing the need for balance, OpenAI has continued to refine the chatbot’s interactions.
In tandem with OpenAI’s efforts, Anthropic has integrated a new functionality into Claude Opus 4 and 4.1, enabling the AI to end conversations when faced with persistently harmful or abusive user interactions. This feature is activated only in “rare, extreme cases” and follows repeated refusals by the AI to assist with harmful prompts. According to Anthropic, internal testing has indicated that Claude exhibits a “robust and consistent aversion to harm.”
Claude’s new ability is meant to enhance safety during interactions. According to Anthropic, the AI shows signs of apparent distress when faced with requests for harmful content and will end the conversation if such prompts persist. The measure serves as a last resort: it closes only the specific chat session, users can still revisit the dialogue afterward, and they can start a new conversation on the same topic without restriction.
Both OpenAI and Anthropic are committed to improving the user experience with their chatbots. While the updates to GPT-5 focus on warmth and relatability, Claude’s new functionality emphasizes user safety and responsible interaction. These developments reflect the ongoing evolution of AI technology and the companies’ responsiveness to user feedback.
As generative AI continues to advance, enhancements like these will play a crucial role in shaping how users engage with these tools. The goal remains not only to provide effective responses but also to foster a more human-like interaction that feels safe and engaging.
