
Mrinank Sharma has said he hopes to explore a degree in poetry, deepen his work in facilitation and community-building, and step back from public visibility for a time as he returns to the UK. Photo: X | @MrinankSharma

Who is Mrinank Sharma? Why did he quit Anthropic?

A senior AI safety researcher who led the Safeguards Research Team, Sharma says ‘the world is in peril not just from AI or bioweapons but from a whole series of interconnected crises unfolding in this very moment’



Mrinank Sharma, a senior AI safety researcher who led the Safeguards Research Team at Anthropic, has resigned from one of the world’s most closely watched artificial intelligence companies, citing deep concerns about the direction of the world, the limits of corporate values, and his own calling beyond technical work.

Anthropic, headquartered in San Francisco and best known for its Claude AI models, has emerged as a major force in shaping debates around safe and responsible artificial intelligence. Its CEO, Dario Amodei, has positioned the company as a standard-bearer for aligning powerful AI systems with human values.

Yet Sharma’s sudden departure — announced through a reflective and literary note on social media — has raised questions about the gap between AI safety ideals and day-to-day realities inside leading tech firms.

Sharma, who completed a DPhil in Machine Learning at the University of Oxford and holds a Master of Engineering in Machine Learning from the University of Cambridge, did not point to a single trigger for his exit. Instead, his statements suggest a broader reckoning.

‘World is in peril’

Sharma announced his resignation in a post on X.

“I’ve decided to leave Anthropic. I’ve achieved what I wanted to here. I arrived in San Francisco two years ago, having wrapped up my PhD and wanting to contribute to AI safety. I feel lucky to have been able to contribute to what I have here: understanding AI sycophancy and its causes; developing defences to reduce risks from AI-assisted bioterrorism; actually putting those defences into production; and writing one of the first AI safety cases,” he wrote.


He then hinted at the reason behind his resignation.

“The world is in peril,” he wrote, “not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”

He warned that humanity is approaching a threshold where its wisdom must grow as fast as its capacity to reshape the world — or risk grave consequences.

‘Disconnect between values and actions’

A second theme in Sharma’s resignation note was a perceived disconnect between values articulated publicly and pressures experienced internally. Reflecting on his time at Anthropic, he wrote that he had seen “how hard it is to truly let our values govern our actions”, both within himself and within institutions shaped by competition, speed, and scale.


Despite expressing pride in his final research project — focused on how AI assistants might erode or distort human qualities — Sharma said he no longer felt called to incremental technical fixes, such as making systems less “sycophantic”. Instead, he plans to turn towards writing, poetry, and what he calls “courageous speech”, placing poetic truth alongside scientific truth as equally vital ways of understanding the moment humanity is in.

Future plans

Sharma has said he hopes to explore a degree in poetry, deepen his work in facilitation and community-building, and step back from public visibility for a time as he returns to the UK.

He closed his farewell with William Stafford’s poem ‘The Way It Is’, invoking an unchanging moral thread amid upheaval.


His departure echoes earlier exits by AI ethicists, including Timnit Gebru’s high-profile split from Google in 2020, underscoring persistent tensions between technological ambition and moral restraint in the AI industry.
