In a striking move that signals a shift in how AI safety could be handled in the industry, OpenAI and Anthropic collaborated this summer on a first-of-its-kind joint safety evaluation, testing each other’s publicly available language models under controlled, adversarial conditions to reveal blind spots in their respective internal safety protocols. Claude Opus 4 and Sonnet 4—Anthropic’s models—excelled at respecting instruction hierarchies and resisting system-prompt extraction, but underperformed on jailbreak resistance, while OpenAI’s reasoning models (o3, o4-mini) held up better under adversarial jailbreak attempts yet generated more hallucinations. Notably, Claude models frequently opted to refuse answers (~70% refusal rate when uncertain), whereas OpenAI models attempted responses more often, leading to higher hallucination rates—suggesting that a middle ground balancing safety and utility may be needed. Both parties emphasized that these exploratory tests are not meant for direct ranking, but rather to elevate industrywide safety standards, informing improvements in newer versions like GPT‑5.
Sources: OpenAI.com, EdTech Innovation Hub, StockTwits.com
Key Takeaways
– Distinct Strengths & Weaknesses: Anthropic’s Claude models are cautious and strong at instruction hierarchy tests but weaker in jailbreak resilience; OpenAI’s reasoning models are more robust against adversarial prompts but risk generating more hallucinations.
– Hallucination vs. Refusal: Claude AI tends to refuse when unsure, avoiding misinformation but reducing utility; OpenAI models attempt more answers with higher risk of inaccuracies.
– Setting the Tone for Collaboration: This unprecedented cross-lab testing underscores the value of transparency and shared safety oversight, pointing toward a future of cooperative AI regulation and joint evaluation standards.
In-Depth
This collaborative testing venture between OpenAI and Anthropic is a refreshing and reassuring development in the increasingly competitive world of AI research. It’s not just about setting modest safety standards—it’s about pushing the envelope on transparency and accountability.
By opening up their models to each other under relaxed safeguards, both labs acknowledged a reality: internal testing can miss critical misalignment behaviors. Claude Opus 4 and Sonnet 4 demonstrated impressive discipline in following instruction hierarchies and resisting system-prompt extraction. That’s no small feat—mismanaging system directives can have serious, real-world consequences. Yet, these models stumbled when prompted with jailbreak scenarios, an area where OpenAI’s reasoning models—o3 and o4-mini—showed greater robustness.
However, their success came with a trade-off. OpenAI’s models were more prone to hallucinate when pushed under challenging evaluation conditions, offering answers even when unreliable. Claude AI, preferring to sit tight, refused more often—sometimes up to 70% when uncertain. The real insight here is that neither extreme is ideal. A model that refuses too often can frustrate users; one that hallucates risks misinformation. A balanced approach—like what OpenAI’s co-founder Wojciech Zaremba and Anthropic’s Nicholas Carlini both alluded to—could offer reliability without sacrificing utility.
Beyond technical outcomes, this joint evaluation sets a compelling example for the industry. It demonstrates that even rivals can and should collaborate on matters of safety and public trust. Rather than retreating behind proprietary walls, these organizations are forging a path toward shared benchmarks, promising incremental improvement in models like GPT-5 and future Claude releases. If broader industry players follow suit, joint safety testing could become the new norm—not an exception.

