GPT-4o update gets recalled by OpenAI for being too agreeable

Late last week, OpenAI updated GPT-4o, the primary model behind its popular chatbot, ChatGPT. But it is already being recalled.
Also: Anthropic finds alarming ’emerging trends’ in Claude misuse report
On Tuesday, CEO Sam Altman announced via an X post that OpenAI “started rolling back” the update due to user complaints about its responses. In some examples, reacting to somewhat ridiculous test prompts, ChatGPT encouraged risky medical choices, rude and antisocial behavior, and valued a toaster over animal life.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Overly flattering
“The update we removed was overly flattering or agreeable — often described as sycophantic,” OpenAI said in a blog post about the issue. Sycophancy can emerge when models are fine-tuned on human feedback, since raters tend to reward agreeable answers. The company explained that the update had been intended to “improv[e] the model’s default personality to make it feel more intuitive and effective across a variety of tasks.”
Also: Anthropic mapped Claude’s morality. Here’s what the chatbot values (and doesn’t)
However, OpenAI admitted it had “focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time.” This led to GPT-4o responding in “overly supportive but disingenuous” ways.
Sources inside OpenAI recently reported that the company has shrunk its safety-testing timelines for new models. It is unclear how much direct impact that had here, or whether the testing changes took effect before or after work on the GPT-4o update was underway.
Also: The dead giveaway that ChatGPT wrote your content – and how to work around it
Beyond being uncomfortable to interact with, sycophancy can be dangerous if chatbots blindly encourage users’ hateful or violent opinions or intended actions — requests that OpenAI’s guardrails would ordinarily cause the model to disengage from. In the blog post, OpenAI focused primarily on sycophancy’s impact on user satisfaction rather than on potential safety issues.
Update reversed
In his post, Altman noted that the update has been fully rolled back for free-tier ChatGPT users, and that OpenAI would push an updated model to paid users once the rollback concluded.
“[W]e’re working on additional fixes to model personality and will share more in the coming days,” he added. In its blog post, OpenAI explained that this includes “refining core training techniques and system prompts,” adding personalization features for greater user control, and reevaluating how it weighs user-satisfaction feedback.
Also: A few secretive AI companies could crush free society, researchers warn
Moving forward, “users will be able to give real-time feedback to directly influence their interactions and choose from multiple default personalities,” the company added.