Anthropic researcher resigns in cryptic public letter

An Anthropic researcher has announced his resignation, warning that the world is “in danger” in a letter laced with mystery and poetry.

The employee, Mrinank Sharma, had led the Claude chatbot maker’s Safeguards research team since it was formed early last year and had been with the company since 2023. During his tenure, Sharma said, he explored the causes of AI sycophancy, developed defenses against “AI-assisted bioterrorism,” and wrote “one of the first AI safety cases.”

But on Monday he uploaded a post saying that it would be his last day at the company, along with a copy of a letter he had shared with colleagues. It’s light on specifics, but it hints at internal tensions over the safety of the technology.

“Throughout my time here, I have seen again and again how difficult it is to let your values dictate your actions,” Sharma said, claiming that employees face “constant pressure to put aside what matters most.”

He also issued a stark warning about the state of the world.

“I constantly find myself thinking about our situation: that the world is in peril,” he wrote. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding at this very moment.”

He added, “It appears we are approaching a threshold where our wisdom must grow in equal measure to our ability to influence the world, or else we will face the consequences.”

The resignation comes as Anthropic’s recently released Claude Cowork model helped trigger a stock market selloff, over fears that its new plugins could displace software vendors and automate away some white-collar jobs, especially in legal roles.

Amid the selloff, Wired reported that employees were privately troubled by their own AI’s potential to hollow out the labor market.

“I feel like I’m coming to work every day to fire myself,” said one employee in an internal survey. “In the long run, I think AI will take over everything and make me and many others irrelevant,” said another.

High-profile resignations, including over safety concerns, are not uncommon in the AI industry. A former member of OpenAI’s now-defunct “Superalignment” team, for instance, announced he was stepping down after feeling the company was “prioritizing new, shiny products” over safety.

It’s also not uncommon for these resignations to double as self-exonerating advertisements for the departing employee, or perhaps for the new startup they’re launching or joining, where they vow to be safer than everyone else. Hint at enough internal problems, or leave enough loaded silences, and there’s a good chance your departure will generate some headlines.

Others leave less quietly, such as former OpenAI economics researcher Tom Cunningham, who on his way out shared an internal message accusing OpenAI of turning his research team into a propaganda arm and discouraging the publication of research critical of AI’s negative impacts.

If Sharma’s resignation is an advertisement for anything, it doesn’t seem to be another venture in the industry he’s leaving behind. “I hope to pursue a degree in poetry and devote myself to the practice of courageous speech,” he wrote. In a footnote, he cited a book advocating a new school of philosophy called “CosmoErotic Humanism.” Its listed author, David J. Temple, is a collective pseudonym used by several authors, including Marc Gafni, an infamous New Age spiritual guru who has been accused of sexually exploiting his followers.

More on AI: Anthropic insiders fear they’ve crossed a line
