Illustration by Tag Hartman-Simkins/Futurism. Source: Chip Somodevilla/Getty Images
OpenAI has long published research on the potential security and economic impact of its technology.
Now, Wired reports that the Sam Altman-led company is being more “guarded” about publishing research that points to an inconvenient truth: that AI could be bad for the economy.
The alleged censorship has become such a point of frustration that at least two OpenAI employees on its economic research team have left the company, according to four sources who spoke to Wired.
One of those employees was economics researcher Tom Cunningham. In a final farewell message shared internally, he wrote that the economic research team was moving away from doing actual research and instead acting like the propaganda arm of its employer.
Shortly after Cunningham’s departure, OpenAI Chief Strategy Officer Jason Kwon sent a memo saying the company should be “creating solutions,” not just publishing research on “hard topics.”
“My stance on difficult topics is not that we shouldn’t talk about them,” Kwon wrote on Slack. “Rather, because we are not just a research institution, but also an actor (indeed the leading actor) in the world that puts the subject of investigation (AI) into the world, we are expected to take agency for the outcomes.”
The reported censorship, or at least hostility toward putting forward work that portrays AI in an unflattering light, is emblematic of OpenAI’s move away from its nonprofit and explicitly philanthropic roots as it turns into a global economic juggernaut.
When OpenAI was founded in 2015, it promoted open-source AI and research. Today its models are closed-source, and the company has reorganized itself into a for-profit public benefit corporation. Reports also suggest that the private entity plans to go public at a valuation of $1 trillion, which would be one of the largest initial public offerings ever, though exactly when is unclear.
Although its nonprofit arm is nominally in control, OpenAI has raised billions of dollars in investment, signing deals that could bring in hundreds of billions more, along with contracts to spend similarly enormous sums. AI chipmaker Nvidia has agreed to invest up to $100 billion in OpenAI on one end, while on the other the company says it will pay Microsoft up to $250 billion for its Azure cloud services.
With that kind of money hanging in the balance, there are billions of reasons why OpenAI wouldn’t want to release findings that could shake the public’s already shaky confidence in its technology. Many fear its potential to destroy or replace jobs, not to mention an AI bubble or the existential risks the technology could pose to the human race.
OpenAI’s economic research is currently overseen by Aaron Chatterji, according to Wired. Chatterji led a report released in September showing how people around the world use ChatGPT, presenting it as evidence that the tool creates economic value by increasing productivity. If that sounds suspiciously glowing, an economist who previously worked with OpenAI, speaking anonymously, told Wired that the company was increasingly publishing work that glorified its technology.
Cunningham is not the only employee to leave the company over ethical concerns about its direction. William Saunders, a former member of OpenAI’s now-defunct “Superalignment” team, said he left after feeling that the company was prioritizing “making new, shiny products” over user safety. Since his departure last year, former safety researcher Steven Adler has repeatedly criticized OpenAI for its risky approach to AI development and highlighted how ChatGPT is leading its users into mental health crises and delusion. Wired noted that Miles Brundage, OpenAI’s former head of policy research, complained after leaving last year that it had become “harder” to publish research “on all the topics that are important to me.”
More on OpenAI: Sam Altman says child care is now impossible without ChatGPT
