AI safety bills await Hochul's signature
ALBANY, N.Y. (NEXSTAR) — In the home stretch of New York's 2025 legislative session, legislators are passing bills meant to control the growing power of artificial intelligence. They include new requirements for large AI companies and stricter regulations for how state agencies can use the rapidly evolving technology.
The State Senate approved the Responsible AI Safety and Education Act on Thursday, and the Assembly passed its version on Friday. S6953B/A6453B cleared both houses of the legislature with "overwhelming bipartisan support," according to a press release from State Sen. Andrew Gounardes and Assemblymember Alex Bores, who sponsored the bill.
Another bill, S7599C/A8295D, sponsored by State Sen. Kristen Gonzalez, also passed the Senate on Thursday before making its way through the Assembly on Monday. It would require state agencies to regularly review automated decision-making software that uses AI without human involvement.
The RAISE Act
The RAISE Act would require larger AI companies, such as Meta, OpenAI, DeepSeek, and Google, to write, publish, and follow safety and security plans. It covers both "frontier models," the most powerful AIs that cost over $100 million in computing power to train, and smaller AI systems that cost as little as $5 million but are based on frontier models and have similar abilities. Universities using AI for academic research would be exempt.
The plans would have to describe how the company would prevent its AIs from causing "critical harm," defined as death or serious injury to 100 or more people or at least $1 billion in property damage. Such harm could include an AI providing instruction on chemical, biological, radiological, or nuclear weapons, or an AI acting without human oversight committing an act that would qualify as a crime requiring intent, recklessness, or gross negligence.
Companies wouldn't be allowed to release AIs that could reasonably lead to critical harm. They'd also have to report to the New York State Attorney General and the Division of Homeland Security and Emergency Services about any serious incidents—like if an AI acts dangerously or a dangerous model gets stolen—within 72 hours of identifying them.
The bill wouldn't let individuals sue AI companies directly for breaking the law. But the attorney general could take civil action against them, with penalties of up to $10 million for a first violation and up to $30 million for each subsequent one. The bill also carves out specific protections for whistleblowers.
"Would we let automakers sell a car with no brakes?" asked Gounardes. "Why would we let developers release incredibly powerful AI tools without basic safeguards in place?" He said that the RAISE Act would let the AI industry grow while holding its major movers and shakers accountable for the effects of their products.
And according to Bores, "New York is poised to be the first government in the United States to do what Americans have been screaming for: require basic guardrails for AI safety." He said the bill makes sure that developers don't break their promises to keep their users safe. Bores also said that 84% of New Yorkers support the bill.
Automated government decision-making
In December, Hochul signed the LOADinG Act, or the Legislative Oversight of Automated Decision-making in Government Act. The more recent bill from Gonzalez, who chairs the Senate Committee on Internet and Technology, expands on prior protections.
Under the bill, state agencies would have to report their use of AI decision-making systems to the governor and the legislature. That could address concerns raised by State Comptroller Thomas DiNapoli in April, who argued that state agencies lack guidance for the AI they're already using, which can worsen existing biases or affect civil rights. Agencies currently use AI, for example, to monitor prisoners' phone calls and detect driver's license fraud.
Both bills that passed this session are now en route to Governor Kathy Hochul, awaiting her signature. The measures work on the same timetable and would take effect 90 days after being signed into law.
Further regulations
Concerns about AI's potential dangers have been widespread. According to Thomas Woodside, co-founder and senior policy advisor at the Secure AI Project, measures that regulate AI represent positive progress. "This is good for AI, good for public safety, and good for New Yorkers," he said in a statement.
Other AI-related bills cleared the Senate on Friday and await Assembly approval before the session ends, currently scheduled for Tuesday. Because the larger Assembly can move more slowly, more legislation could still reach Hochul's desk.
For example, Gonzalez also sponsored S934A/A3411B, which would require generative AIs to notify their users that responses or outputs from the system could be inaccurate. It would also impose a penalty of up to $1,000 for each violation. It's sponsored in the Assembly by Clyde Vanel.
Gonzalez also sponsored S1169A/A8884, the New York AI Act, a consumer protection bill to stop algorithmic discrimination by regulating how AI systems get developed and used. It would require independent audits of high-risk generative AIs that relate to New Yorkers' rights, safety, or well-being. Sponsored in the Assembly by Michaelle Solages, it would also give the attorney general enforcement powers and let people sue.
Gonzalez and Solages also proposed the privacy-focused Secure Our Data Act, S1961/A5739, which cleared the Senate back in May. It would require state entities to adopt stronger protections and recovery methods for personal data.
Another related bill sponsored by Gounardes and Bores has not advanced in either chamber. S6954A/A6540C would require providers to include data about the original source of any content created or altered by their generative AI systems. It could address concerns about misinformation and disinformation raised in an August report on AI from the office of Attorney General Letitia James. That report also flagged concerns about AI's potential for bias and its effects on privacy and job displacement.
And another related bill sponsored by Gonzalez and Vanel, S5668/A222A, also failed to advance. The Chatbot Liability Bill would hold companies responsible for misinformation communicated by their chatbots.