NY Governor Seeks AI Safety Via RAISE Act Signing
Governor Kathy Hochul just made New York the second state in the nation to impose comprehensive AI safety regulations.
The RAISE Act became law after months of negotiations, creating strict new rules for the most powerful AI companies operating in the US.
Hochul signed the bill soon after President Trump issued an executive order designed to block exactly this type of state-level AI regulation. The defiant signing signals that states aren't backing down from regulating AI, even under federal pressure.
“This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” Hochul said in the announcement.
The 72-hour rule
New York’s law targets AI models trained with computing power exceeding 10^26 floating-point operations—essentially the most advanced systems from companies like OpenAI, Google, and Meta. Companies must now publicly disclose their safety testing protocols and report critical safety incidents within 72 hours.
That 72-hour deadline is notably stricter than California's 15-day timeline. The real teeth, though, are in enforcement: penalties reach up to $1 million for a first violation and $3 million for each subsequent violation.
The legislation also establishes a new regulatory office within the Department of Financial Services specifically to monitor AI development. Companies now face 24/7 oversight for high-risk applications capable of enabling cyberattacks or generating harmful content.
Federal pushback couldn’t stop state action
The federal-state clash had been brewing for weeks. Trump’s Dec. 11 executive order specifically seeks to create a “minimally burdensome, uniform national policy framework for AI” and establishes a task force to challenge state AI laws on constitutional grounds.
The order explicitly critiques California's similar legislation and directs the Attorney General to challenge state AI laws. Yet Hochul proceeded anyway, suggesting states are prepared for a high-stakes legal showdown over regulatory authority.
Trump’s order even weaponizes federal funding against defiant states. But New York’s bold move demonstrates states aren’t intimidated by financial pressure when it comes to protecting their residents from AI risks.
What changed from the original bill
Behind the scenes, a fierce lobbying battle reshaped the final legislation from its original form. Initial proposals from Hochul would have weakened the bill substantially, including reducing penalties from $10-30 million to just $1 million and extending incident reporting deadlines from 72 hours to 15 days.
The negotiations revealed intense pressure campaigns. Industry groups that held fundraisers for Hochul pushed hard for these changes. The final compromise maintained the crucial 72-hour reporting requirement while adjusting other provisions.
The law transforms safety “protocols” into “frameworks” that describe general approaches rather than specific compliance requirements. This strategic shift gives companies more flexibility while maintaining transparency obligations.
Battle lines are already drawn
Support has emerged from labor unions and civil rights groups who view the law as protection against job displacement and discrimination. Meanwhile, some developers express concern about administrative burdens from new reporting requirements.
Legal challenges appear inevitable. Experts anticipate pushback from tech companies who may argue the act oversteps into federal territory or infringes on trade secrets. Companies now face the challenge of monitoring AI systems 24/7 to catch incidents within the 72-hour deadline.
The law’s enforcement mechanism empowers the state Attorney General with civil penalties, creating a direct pathway for regulatory action against non-compliant companies. This positions New York alongside California in creating a new regulatory landscape that could reshape how the entire AI industry approaches safety and transparency.