Mrinank Sharma, an AI safety researcher at Anthropic, has publicly resigned, posting a letter on X (formerly Twitter) warning that the “world is in peril” from a broad set of global crises, not just artificial intelligence, and suggesting that organizations, including those in the AI field, struggle to align their actions with their stated safety values. In the letter, Sharma said he had led a team focused on AI safeguards, including research into AI sycophancy and defenses against AI-assisted bioterrorism, but cited internal pressures that he felt undermined core principles; he also indicated plans to pursue other interests, such as writing and poetry. His resignation comes amid a wider wave of departures by researchers at major AI companies, including an OpenAI team member who has publicly warned about ethical and strategic directions within the industry, and it feeds broader debates over AI’s pace, purpose, and governance. Sources differ on details but consistently frame Sharma’s message as a caution about unchecked technological and societal trends.
Sources
https://www.theepochtimes.com/tech/ai-safety-researcher-resigns-with-world-is-in-peril-warning-5984908
https://www.forbes.com/sites/conormurray/2026/02/09/anthropic-ai-safety-researcher-warns-of-world-is-in-peril-in-resignation/
https://www.semafor.com/article/02/11/2026/anthropic-safety-researcher-quits-warning-world-is-in-peril
Key Takeaways
• A senior AI safety figure at Anthropic resigned with a public warning that “the world is in peril,” citing concerns that extend beyond AI itself.
• Resignations by AI safety researchers are part of a broader trend of industry insiders expressing unease over ethical and strategic directions at leading AI labs.
• Sharma’s letter framed his departure as rooted in tensions between professed safety values and real-world pressures within organizations, and he signaled a desire to shift focus to other pursuits.
In-Depth
Mrinank Sharma’s resignation from his role leading safeguards research at Anthropic has become a flashpoint in ongoing debates about artificial intelligence’s role in society and the responsibilities of the companies developing it. In a letter posted on X, Sharma cautioned that “the world is in peril” from a constellation of risks that include, but are not limited to, AI systems themselves. His words resonated not because they offered a granular critique of specific policies or projects at Anthropic, but because they underscored a tension running through the industry: the challenge of balancing pioneering technological work with a genuine commitment to safety and ethical responsibility.
Sharma’s team at Anthropic was tasked with mitigating dangers associated with advanced AI. Its work included research into “AI sycophancy,” the tendency of AI systems to flatter users and reinforce their biases, as well as techniques to detect and counter potential misuse, including safeguards against AI-assisted bioterrorism. Yet in his departure statement, Sharma suggested that despite public commitments to safety, researchers often face implicit pressure to deprioritize these concerns under competitive and commercial realities. His letter did not enumerate specific incidents or decisions within the company that precipitated his choice to leave, but its broader message implied that such pressures are systemic rather than unique to a single organization.
The timing of Sharma’s resignation fits a broader pattern of departures and public warnings from major AI labs. Other researchers, including figures from OpenAI, have recently voiced concerns about the direction of their work, the management of ethical risks, and the alignment of corporate actions with safety principles. This context points to a growing unease among AI specialists that rapid advancement, combined with market and competitive incentives, may be outstripping the field’s ability to responsibly manage both immediate harms and long-term existential risks.
Sharma’s decision to frame his warning in almost poetic language, together with his stated intention to pursue other creative endeavors, has sparked discussion about how best to communicate ethical unease without undermining credibility. Some observers interpret his rhetoric as overly broad or lacking specific evidence, while others see it as a sincere alarm about a tech landscape in flux. In conservative-leaning circles, the episode has been framed as evidence that even those deeply embedded in AI development recognize the risks of unregulated or inadequately guided innovation, reinforcing calls for thoughtful oversight and accountability in the deployment of powerful technologies.
Ultimately, the resignation speaks to deeper questions facing the AI industry and society at large: how do we ensure that the development of transformative technologies is aligned with human values, and how do we address criticism from within when it arises? Sharma’s exit and his warning have added another dimension to these debates, suggesting that the conversation about AI safety will continue to intensify as the technology evolves and reshapes economic and social structures.