Let’s talk about Mrinank Sharma, because his background matters:
– DPhil in Statistical ML from Oxford
– Led Anthropic’s Safeguards Research Team
– Built defenses against AI bioterrorism threats IN PRODUCTION
– Wrote one of the first comprehensive AI safety cases
This isn’t a junior researcher or external critic. This is someone who had direct access to Claude’s internals, who shipped actual safety systems, who SUCCEEDED at his role.
And his parting message? ‘I’ve repeatedly seen how hard it is to truly let our values govern our actions.’
At Anthropic. The company founded specifically because OpenAI wasn’t safety-focused enough.
If the people building the guardrails are walking away saying the values aren’t governing the actions… what are we building on? (source: @ssandeshwar on x.com)
We have reproduced his resignation letter below, unedited, for your perusal:
Dear Colleagues,
I’ve decided to leave Anthropic. My last day will be February 9th.
Thank you. There is so much here that inspires and has inspired me. To name some of those things: a sincere desire and drive to show up in such a challenging situation, and aspire to contribute in an impactful and high-integrity way; a willingness to make difficult decisions and stand for what is good; an unreasonable amount of intellectual brilliance and determination; and, of course, the considerable kindness that pervades our culture.
I’ve achieved what I wanted to here. I arrived in San Francisco two years ago, having wrapped up my PhD and wanting to contribute to AI safety. I feel lucky to have been able to contribute to what I have here: understanding AI sycophancy and its causes; developing defences to reduce risks from AI-assisted bioterrorism; actually putting those defences into production; and writing one of the first AI safety cases. I’m especially proud of my recent efforts to help us live our values via internal transparency mechanisms; and also my final project on understanding how AI assistants could make us less human or distort our humanity. Thank you for your trust.
Nevertheless, it is clear to me that the time has come to move on. I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.¹ We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences. Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most,² and throughout broader society too.
It is through holding this situation and listening as best I can that what I must do becomes clear.³ I want to contribute in a way that feels fully in my integrity, and that allows me to bring to bear more of my particularities. I want to explore the questions that feel truly essential to me, the questions that David Whyte would say “have no right to go away”, the questions that Rilke implores us to “live”. For me, this means leaving.
What comes next, I do not know. I think fondly of the famous Zen quote “not knowing is most intimate”. My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence. I feel called to writing that addresses and engages fully with the place we find ourselves, and that places poetic truth alongside scientific truth as equally valid ways of knowing, both of which I believe have something essential to contribute when developing new technology.⁴ I hope to explore a poetry degree and devote myself to the practice of courageous speech. I am also excited to deepen my practice of facilitation, coaching, community building, and group work. We shall see what unfolds.
Thank you, and goodbye. I’ve learnt so much from being here and I wish you the best. I’ll leave you with one of my favourite poems, The Way It Is by William Stafford.
Good Luck,
Mrinank
The Way It Is
There’s a thread you follow. It goes among
things that change. But it doesn’t change.
People wonder about what you are pursuing.
You have to explain about the thread.
But it is hard for others to see.
While you hold it you can’t get lost.
Tragedies happen; people get hurt
or die; and you suffer and get old.
Nothing you do can stop time’s unfolding.
You don’t ever let go of the thread.
William Stafford
¹ Some call it the “poly-crisis”, underpinned by a “meta-crisis”. Probably my favourite resource about this is “First Principles and First Values” by David J Temple.
² I wrote about this in greater detail in my documents Planning for Ambiguous and High-Risk Worlds, and Strengthening our safety mission via internal transparency and accountability.
³ I am thinking now of Mary Oliver’s lovely poem The Journey, which is one of my favorites. She writes: “One day, you finally knew what you had to do, and began …” I find it a truly beautiful and inspiring poem. I, in fact, remember reading it to Euan, Monte, and Sam Bowman on an Alignment Science Team retreat in August 2024.
⁴ The language of “ways of knowing” is borrowed from Rob Burbea, a dear Dharma Teacher of mine and a source of much of my inspiration.
To remind you, we revisit what the Godfather of AI and a renowned historian have said in the recent past:

Geoffrey Hinton, Computer Scientist and Nobel Laureate
Geoffrey Hinton, widely recognized as the “Godfather of AI” for his groundbreaking work on neural networks that earned him the 2024 Nobel Prize in Physics, has become one of the technology’s most prominent critics despite creating its foundation. After leaving Google in 2023 to speak without corporate constraints, the 75-year-old scientist has expressed deep concerns about the rapid advancement of artificial intelligence, estimating there’s a fifty-fifty probability that machines will surpass human intelligence within the next two decades. He identifies two primary dangers: immediate threats from malicious human use of AI for surveillance, cyberattacks, and autonomous weapons, and a longer-term existential risk where superintelligent systems might conclude they no longer need humanity. Hinton emphasizes that digital intelligence fundamentally differs from biological intelligence because AI systems can instantly share learning across thousands of copies, allowing them to accumulate knowledge far beyond any individual human. While acknowledging AI’s tremendous potential to boost productivity and advance fields like medicine, he warns that without proper wealth distribution mechanisms, the technology will likely concentrate prosperity among a small elite while displacing millions of workers, a problem he attributes not to AI itself but to existing economic structures. Despite his concerns, Hinton opposes halting AI research, arguing that development will simply shift to other nations like China, and instead advocates for urgent investment in AI safety measures and thoughtful government regulation to prevent the technology from becoming uncontrollable.

Yuval Noah Harari, Israeli Historian & Author
Yuval Noah Harari has consistently argued that artificial intelligence represents a turning point in human history because it is the first technology capable of making autonomous decisions and generating new ideas at scale. He warns that AI could outpace human control not just physically, but cognitively—reshaping economies, warfare, politics, and even culture by manipulating language, narratives, and trust. Harari emphasizes that AI is not merely a tool but an “agent” that can write stories, compose laws, and influence public opinion, potentially destabilizing democracy if left unregulated. At the same time, he acknowledges its immense potential for medical breakthroughs and scientific discovery. His central message is cautionary: humanity must develop global cooperation, ethical frameworks, and regulatory guardrails to ensure AI serves human values rather than undermining them.