
    OpenAI Shuffles Its Model-Behavior Researchers, Launches New Labs Initiative

Updated: December 25, 2025 · 3 min read

OpenAI has announced a restructuring of its Model Behavior team—about 14 researchers responsible for developing the personality of ChatGPT and managing issues like sycophancy and political bias—by folding it into the larger Post Training research group led by Max Schwarzer, signaling that "personality" is now treated as a core part of model development. Joanne Jang, founder of the Model Behavior team, is departing it to lead a new internal venture called OAI Labs, which will explore interface paradigms beyond chat and enable new forms of human-AI collaboration. The move comes amid criticism of GPT-5's colder tone despite its reduced sycophancy, which prompted OpenAI to restore access to legacy models like GPT-4o and issue an update reintroducing warmth into responses.

    Sources: TechCrunch, Know Techie

    Key Takeaways

    – OpenAI is integrating its Model Behavior team into its core model training pipeline, underscoring how vital personality shaping is to its AI development goals.

    – Joanne Jang’s move to helm OAI Labs suggests OpenAI is pursuing new interfaces and ways for people to work with AI, moving beyond classic chat models.

    – The reintroduction of warmer responses in GPT-5 and renewed access to legacy models show OpenAI balancing technological progress with user sentiment and expectations.

    In-Depth

    OpenAI’s recent reorganization of its Model Behavior team into the broader Post Training research group is a pragmatic, forward-looking shift that makes sense in the long run. This small but influential team—about 14 people—has been the driving force behind how OpenAI’s models express themselves: fostering balance, reducing sycophancy (the tendency of an AI to simply agree with users), and navigating political bias. It might seem subtle, but “personality” is everything when users are interacting with ChatGPT—and OpenAI is signaling it’s no longer just a nice-to-have, but absolutely central to model effectiveness.

    Joanne Jang, who led that team, isn’t leaving the company; instead, she’s heading up a new venture: OAI Labs. The focus? Inventing fresh interfaces for human–AI collaboration—tools that move beyond chat, perhaps toward instruments for thinking, creating, or learning. That kind of ambition opens new doors, reinforcing that OpenAI remains a thoughtful innovator even as it grows.

    For all of this to really succeed, though, OpenAI has shown it’s listening to user feedback. GPT-5, for all its technical strides (like cutting down on sycophancy), felt colder to some. The response? Bring back legacy options like GPT-4o and patch in updates to bring warmth back—all without losing what users expect from a helpful AI companion. That’s a solid, no-frills approach: tech that moves forward, but not at the expense of user trust and satisfaction. 

    In short, this reorg reflects OpenAI’s mature path: aligning core research, empowering innovation in interface design, and staying grounded in what users want. Conservative? Maybe. But steady, careful, and smart—not flashy.

