Parents call on New York governor to sign historic AI safety bill

A group of more than 150 parents sent a letter to New York Governor Kathy Hochul on Friday, urging her to sign the Responsible AI Safety and Education (RAISE) Act without any changes. The RAISE Act is a controversial bill that would require developers of large AI models, such as Meta, OpenAI, DeepSeek, and Google, to create safety plans and follow transparency rules around reporting safety incidents.

The bill passed both the New York State Senate and Assembly in June. But this week, Hochul reportedly proposed a near-complete rewrite of the RAISE Act that would make it more favorable to tech companies, similar to some of the changes made to California's SB 53 after pressure from large AI companies.

Unsurprisingly, many AI companies are firmly opposed to the law. The AI Alliance, whose members include Meta, IBM, Intel, Oracle, Snowflake, Uber, AMD, Databricks, and Hugging Face, sent a letter to New York lawmakers in June detailing its "deep concerns" about the RAISE Act, calling it "impractical." And Leading the Future, a pro-AI super PAC backed by Perplexity AI, Andreessen Horowitz (a16z), OpenAI's Greg Brockman, and Palantir co-founder Joe Lonsdale, has recently been targeting New York State Assemblymember Alex Bores, who co-sponsored the RAISE Act.

Two organizations, ParentsTogether Action and the Tech Oversight Project, organized Friday's letter to Hochul, which states that some signatories have "lost their children due to the harms of AI chatbots and social media." The signers called the RAISE Act the "minimum guardrail" that should be written into law.

The letter also highlighted that the bill passed by the New York State Legislature "does not regulate all AI developers – only the largest companies, which spend hundreds of millions of dollars per year." Those companies would be required to disclose large-scale safety incidents to the Attorney General and to publish safety plans. Developers would also be prohibited from releasing frontier models "if doing so would create an unreasonable risk of serious harm." The bill defines such harm as the death or serious injury of 100 or more people, or $1 billion or more in damage to money or property, arising from the creation of a chemical, biological, radiological, or nuclear weapon, or from an AI model that "acts without any meaningful human intervention" in a way that, "if it were performed by a human," would constitute certain crimes.

“Big Tech’s fierce opposition to these basic protections sounds familiar because we have seen this pattern of avoidance and evasion before,” the letter said. “Widespread harm to young people – including to their mental health, emotional stability, and ability to function in school – has been widely documented ever since the largest technology companies decided to pursue algorithmic social media platforms without transparency, oversight, or accountability.”
