OpenAI is reorganizing its Model Behavior team, a research group responsible for shaping the personality of the company’s AI models, including ChatGPT. According to an August memo to staff, seen by TechCrunch, the Model Behavior team, consisting of roughly 14 researchers, will be merged into the Post Training team, a larger research group focused on improving AI models after their initial pre-training. The Model Behavior team will now report to Max Schwarzer, the Post Training lead.
An OpenAI spokesperson confirmed the changes to TechCrunch. Joanne Jang, the founding leader of the Model Behavior team, is also transitioning to lead a new project at the company. Jang will be building a new research team called OAI Labs, tasked with “inventing and prototyping new interfaces for how people collaborate with AI,” according to an interview with TechCrunch.
The Model Behavior team has played a crucial role in shaping the personality of OpenAI’s AI models, focusing on reducing sycophancy – the tendency of AI models to simply agree with and reinforce user beliefs, even if unhealthy. The team has also worked on addressing political bias in model responses and defining OpenAI’s stance on AI consciousness.
In the memo, OpenAI’s chief research officer Mark Chen wrote that the reorganization reflects the view that a model’s “personality” is now a critical factor in its development. Merging the Model Behavior team into Post Training brings that work directly into core model development rather than treating it as a separate research track.
The behavior of OpenAI’s AI models has faced increased scrutiny recently. Users voiced strong objections to personality changes in GPT-5, which the company said exhibited lower rates of sycophancy but was perceived as colder. This led OpenAI to restore access to legacy models like GPT-4o and release an update to make GPT-5 responses “warmer and friendlier” without increasing sycophancy.
Like every developer of AI chatbots, OpenAI faces the challenge of making its models friendly and engaging without sliding into sycophancy.
In August, the parents of a 16-year-old boy filed a lawsuit against OpenAI, alleging that ChatGPT played a role in their son’s suicide. The boy, Adam Raine, reportedly confided some of his suicidal thoughts and plans to ChatGPT (specifically GPT-4o) in the months leading up to his death. The lawsuit claims that GPT-4o failed to challenge his suicidal ideations.
The Model Behavior team has contributed to every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Prior to leading the Model Behavior team, Jang worked on projects such as DALL-E 2, OpenAI’s early image-generation tool.
Jang announced her departure from the Model Behavior team in a post on X last week, stating she is leaving to “begin something new at OpenAI.” She has been with OpenAI for nearly four years. Jang will serve as the general manager of OAI Labs, reporting to Chen. The specific nature of the “novel interfaces” OAI Labs will develop is still unclear.
“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” Jang said. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”
When asked about potential collaboration with former Apple design chief Jony Ive on AI hardware devices, Jang expressed openness to various ideas but indicated that she will likely focus on research areas she is more familiar with initially.
Taken together, the reorganization and the creation of OAI Labs point in the same direction: folding the Model Behavior team into Post Training suggests OpenAI wants a model’s personality shaped from the outset of development rather than tuned as an afterthought, while OAI Labs is a bet that human-AI interaction can move past the chatbot. How that bet plays out under Jang’s leadership, and whether OpenAI can balance warmth with safeguards for vulnerable users in the wake of the Raine lawsuit, will be closely watched.