OpenAI has announced a restructuring of its Model Behavior team, the group of roughly 14 researchers responsible for shaping ChatGPT’s personality and managing issues like sycophancy and political bias. The team is being folded into the larger Post Training research group led by Max Schwarzer, a move that elevates “personality” to a core part of model development. Joanne Jang, who founded the Model Behavior team, is moving on to lead a new internal venture called OAI Labs, which will explore interface paradigms beyond chat and enable novel forms of human-AI collaboration. The reorganization comes amid criticism that GPT-5 sounds colder even as it reduced sycophancy, prompting OpenAI to restore access to legacy models like GPT-4o and issue an update reintroducing warmth to its responses.
Sources: TechCrunch, Know Techie
Key Takeaways
– OpenAI is integrating its Model Behavior team into its core model training pipeline, underscoring how vital personality shaping is to its AI development goals.
– Joanne Jang’s move to helm OAI Labs suggests OpenAI is pursuing new interfaces and ways for people to work with AI, moving beyond the classic chat paradigm.
– Restoring warmer responses in GPT-5 and re-offering legacy models shows OpenAI balancing technical progress against user sentiment and expectations.
In-Depth
OpenAI’s recent reorganization of its Model Behavior team into the broader Post Training research group is a pragmatic, forward-looking shift that makes sense in the long run. This small but influential team of about 14 people has been the driving force behind how OpenAI’s models express themselves: striking a balanced tone, reducing sycophancy (the tendency of an AI to simply agree with users), and navigating political bias. It might seem subtle, but personality is everything when users interact with ChatGPT, and OpenAI is signaling that it’s no longer just a nice-to-have but absolutely central to model effectiveness.
Joanne Jang, who led that team, isn’t leaving the company; instead, she’s heading up a new internal venture, OAI Labs. The focus? Inventing fresh interfaces for human–AI collaboration: tools that move beyond chat, perhaps toward instruments for thinking, creativity, or learning. That kind of ambition opens new doors and reinforces that OpenAI remains a thoughtful innovator even as it scales.
Crucially, OpenAI has shown it’s listening to user feedback. GPT-5, for all its technical strides (like cutting down on sycophancy), felt colder to some users. The response? Bring back legacy options like GPT-4o and ship updates that restore warmth, all without losing what users expect from a helpful AI companion. That’s a solid, no-frills approach: technology that moves forward, but not at the expense of user trust and satisfaction.
In short, this reorg reflects OpenAI’s mature path: aligning core research, empowering innovation in interface design, and staying grounded in what users want. Conservative? Maybe. But steady, careful, and smart, not flashy.

