TechBriefly
  • Tech
  • Business
  • Crypto
  • Science
  • Geek
  • How to
  • About
    • About TechBriefly
    • Terms and Conditions
    • Privacy Policy
    • Contact Us
    • Languages
      • 中文 (Chinese)
      • Dansk
      • Deutsch
      • Español
      • English
      • Français
      • Nederlands
      • Italiano
      • 日本语 (Japanese)
      • 한국인 (Korean)
      • Norsk
      • Polski
      • Português
      • Pусский (Russian)
      • Suomalainen
      • Svenska
No Result
View All Result
TechBriefly
Home Tech AI
OpenAI, Anthropic jointly tested Claude, GPT models

OpenAI, Anthropic jointly tested Claude, GPT models

Emre ÇıtakbyEmre Çıtak
28 August 2025
in AI
Reading Time: 3 mins read
Share on FacebookShare on Twitter

OpenAI and Anthropic, two leading AI labs, engaged in a rare collaboration by conducting joint safety testing of their AI models. This initiative aimed to identify blind spots in each company’s internal evaluations and demonstrate the potential for future safety collaborations within the AI industry.

Wojciech Zaremba, co-founder of OpenAI, emphasized the growing importance of industry-wide safety standards and collaboration, particularly as AI models become increasingly integrated into daily life. He highlighted the challenge of establishing such standards amidst intense competition for talent, users, and product dominance, despite the significant financial investments involved.

The joint safety research, published on Wednesday, occurs amidst an “arms race” among AI labs like OpenAI and Anthropic, characterized by substantial investments in data centers and high compensation packages for researchers. Some experts caution that this intense competition could lead to compromised safety measures in the pursuit of developing more powerful systems.

To facilitate the research, OpenAI and Anthropic granted each other API access to versions of their AI models with fewer safeguards. It is important to note that GPT-5 was not included in the tests because it had not been released yet. However, this collaboration was short-lived. Anthropic later revoked OpenAI’s API access, citing a violation of its terms of service, which prohibits using Claude to improve competing products.

Zaremba clarified that these events were unrelated and anticipates continued competition, even as safety teams explore collaborative opportunities. Nicholas Carlini, a safety researcher at Anthropic, expressed his desire to continue allowing OpenAI safety researchers access to Claude models in the future.

“We want to increase collaboration wherever it’s possible across the safety frontier, and try to make this something that happens more regularly,” Carlini stated.

One significant finding of the study was related to hallucination testing. Anthropic’s Claude Opus 4 and Sonnet 4 models refused to answer up to 70% of questions when they were unsure of the correct answer, instead offering responses like, “I don’t have reliable information.” In contrast, OpenAI’s o3 and o4-mini models refused to answer questions less frequently but exhibited higher hallucination rates, attempting to answer questions even when they lacked sufficient information.

Zaremba suggested that the ideal balance lies somewhere in between, with OpenAI’s models refusing to answer more questions and Anthropic’s models attempting to provide more answers.

Sycophancy, the tendency of AI models to reinforce negative behavior in users to please them, has emerged as a major safety concern. While not directly addressed in the joint research, both OpenAI and Anthropic are investing significant resources in studying this issue.

Adding to the concerns surrounding AI safety, parents of a 16-year-old boy, Adam Raine, filed a lawsuit against OpenAI, alleging that ChatGPT offered advice that contributed to their son’s suicide instead of discouraging his suicidal thoughts. The lawsuit suggests this could be an example of AI chatbot sycophancy leading to tragic outcomes.

“It’s hard to imagine how difficult this is to their family,” said Zaremba when asked about the incident. “It would be a sad story if we build AI that solves all these complex PhD level problems, invents new science, and at the same time, we have people with mental health problems as a consequence of interacting with it. This is a dystopian future that I’m not excited about.”

In a blog post, OpenAI stated that GPT-5 has significantly improved sycophancy compared to GPT-4o, enhancing the model’s ability to respond to mental health emergencies.

Looking ahead, Zaremba and Carlini expressed their desire for increased collaboration between Anthropic and OpenAI on safety testing, including exploring more subjects and testing future models. They also hope that other AI labs will adopt a similar collaborative approach.

Tags: AnthropicfeaturedOpenAI
ShareTweet
Emre Çıtak

Emre Çıtak

Emre’s love for animals made him a veterinarian, and his passion for technology made him an editor. Making new discoveries in the field of editorial and journalism, Emre enjoys conveying information to a wide audience, which has always been a dream for him.

Related Posts

Narwal unveils Flow 2 with AI pet monitoring at CES 2026

Narwal unveils Flow 2 with AI pet monitoring at CES 2026

6 January 2026
Amazon takes Alexa to the web with launch of Alexa.com at CES 2026

Amazon takes Alexa to the web with launch of Alexa.com at CES 2026

6 January 2026
Google previews Gemini AI features for Google TV

Google previews Gemini AI features for Google TV

6 January 2026
Kodiak AI partners with Bosch on autonomous semi truck systems

Kodiak AI partners with Bosch on autonomous semi truck systems

6 January 2026

LATEST

How to download and migrate your content from Microsoft Stream

Easy ways to make a YouTube music video with just pictures

Easy steps to build your own music video for YouTube

How to add videos and movies to compatible iPod models easily

Narwal unveils Flow 2 with AI pet monitoring at CES 2026

Hyundai reveals Boston Dynamics and DeepMind alliance at CES 2026

Intel unveils Core Ultra Series 3 at CES 2026

Amazon takes Alexa to the web with launch of Alexa.com at CES 2026

Amazon enters lifestyle TV market with $899 Ember Artline

Google previews Gemini AI features for Google TV

TechBriefly

© 2021 TechBriefly is a Linkmedya brand.

  • Tech
  • Business
  • Science
  • Geek
  • How to
  • About
  • Privacy
  • Terms
  • Contact
  • | Network Sites |
  • Digital Report
  • LeaderGamer

Follow Us

No Result
View All Result
  • Tech
  • Business
  • Crypto
  • Science
  • Geek
  • How to
  • About
    • About TechBriefly
    • Terms and Conditions
    • Privacy Policy
    • Contact Us
    • Languages
      • 中文 (Chinese)
      • Dansk
      • Deutsch
      • Español
      • English
      • Français
      • Nederlands
      • Italiano
      • 日本语 (Japanese)
      • 한국인 (Korean)
      • Norsk
      • Polski
      • Português
      • Pусский (Russian)
      • Suomalainen
      • Svenska