A group of tech companies and academic institutions spent between $17,000 and $25,000 last month on an ad campaign against New York’s landmark AI safety bill — ads that could reach more than two million people, according to Meta’s ad library.
The landmark bill is called the RAISE Act, or Responsible AI Safety and Education Act, and a few days ago New York Governor Kathy Hochul signed a version of it into law. The law requires AI companies developing large frontier models — OpenAI, Anthropic, Meta, Google, DeepSeek, and others — to publish safety plans and follow transparency rules for reporting large-scale safety incidents to the attorney general. But the version Hochul signed — different from the version that passed both the New York State Senate and Assembly in June — had been rewritten in ways far more favorable to technology companies. A group of more than 150 parents had sent a letter to the governor urging her to sign the bill without any changes. And a group of tech companies and academic institutions, called the AI Alliance, helped lead the charge to water it down.
The AI Alliance — the organization behind the opposition ad campaign — counts Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face among its members, which isn’t necessarily surprising. The group sent a letter to New York lawmakers in June expressing its “deep concerns” about the bill, which it deemed “impractical.” But this group is not made up of technology companies alone. Its members include many colleges and universities from around the world, including New York University, Cornell University, Dartmouth College, Carnegie Mellon University, Northeastern University, Louisiana State University, and the University of Notre Dame, as well as Penn Engineering and Yale University.
The ad campaign started on November 23rd and ran with the headline, “The RAISE Act would stunt job growth.” The ads said the legislation would “slow down the New York technology ecosystem that powers 400,000 high-tech jobs and major investments. Instead of stifling innovation, let’s support a future where AI development is open, trusted, and strengthens the Empire State.”
When The Verge asked the educational institutions listed above whether they were aware they were part of an ad campaign against the widely discussed AI safety law, none responded to a request for comment except Northeastern, which did not provide comment by publication time. In recent years, OpenAI and its competitors have increasingly courted academic institutions, drawing them into research partnerships or offering technology directly to students for free.
Many academic institutions that are part of the AI Alliance are not directly involved in one-on-one partnerships with AI companies, but some are. For example, Northeastern’s partnership with Anthropic this year translated to Claude access for 50,000 students, faculty, and staff across 13 global campuses, according to Anthropic’s announcement in April. In 2023, OpenAI funded a journalism ethics initiative at NYU. Dartmouth announced a partnership with Anthropic earlier this month, a Carnegie Mellon University professor currently serves on OpenAI’s board, and Anthropic has funded a program at Carnegie Mellon.
The initial version of the RAISE Act stated that developers should not release frontier models “if doing so would create an unreasonable risk of serious harm,” which the bill defines as the death or serious injury of 100 or more people, or $1 billion or more in damages to money or property, resulting from the creation of a chemical, biological, radiological, or nuclear weapon. The definition also extends to AI models that “act without any meaningful human intervention” and commit acts that, “if performed by a human,” would fall under certain crimes. The version Hochul signed removed this section. Hochul also extended deadlines for the disclosure of safety incidents and reduced fines, among other changes.
The AI Alliance has previously lobbied against AI safety policies beyond the RAISE Act, including California’s SB 1047 and President Biden’s AI executive order. The group states that its mission is to “bring together builders and experts from diverse sectors to collaboratively and transparently address the challenges of generative AI and democratize its benefits,” particularly through “member-driven working groups.” Some of the group’s projects beyond lobbying include cataloging and managing “trustworthy” datasets and creating a ranked list of AI safety priorities.
The AI Alliance was not the only organization to oppose the RAISE Act with advertising dollars. As The Verge recently reported, Leading the Future — a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), Palantir co-founder Joe Lonsdale, and OpenAI president Greg Brockman — has spent money on ads targeting RAISE Act co-sponsor New York State Assemblymember Alex Bores. But Leading the Future is a super PAC with a clear agenda, while the AI Alliance is a nonprofit whose stated objective is “to develop AI collaboratively, transparently and with a focus on safety, ethics and the broader good.”
