Hochul Caves to Big Tech on AI Safety Bill
New York Gov. Kathy Hochul completely rewrote a bill, passed by the state legislature, that was intended to regulate artificial intelligence models in the interest of public safety, replacing it with language favored by the same Big Tech interests that have held fundraisers for her in recent weeks.
The bill, known as the Responsible Artificial Intelligence Safety and Education (RAISE) Act, would in its original form have been the most expansive state-level regulation of AI, governing the testing and reporting of advanced “frontier” models. Co-authored by Assemblymember Alex Bores and Sen. Andrew Gounardes, it would put the onus on frontier model developers to create plans to make their models safer and to proactively report “critical safety incidents,” and it would bar models deemed unsafe in testing from being released. The bill has been sitting on Hochul’s desk for months.
Stakeholders in New York have been described as apoplectic about Hochul’s changes, which weaken the bill in critical ways.
In Washington, Donald Trump is vowing to sign an executive order that would hinder states from passing their own AI regulations. The vow comes after a bid to establish a moratorium on state AI rules failed to make it into the annual defense policy bill. Bores and Gounardes joined a coalition letter from state lawmakers urging Congress not to block state AI regulations.
But Hochul’s gutting of the RAISE Act may make efforts like Trump’s to preempt state AI rules moot. Big Tech is playing a federal-state game, lobbying for Trump administration support to create obstacles to state AI rules while fighting those rules in state capitols. In New York, these companies appear to have won a round, which critics believe could reverberate across the country and make it harder for states to combat the Trump-assisted effort to insulate AI from scrutiny.
Gov. Hochul’s office did not respond to questions from the Prospect.
In New York, governors benefit from a form of line-item veto: they can send proposed edits to legislation, known as a “redline,” back to lawmakers for negotiation. Both sides must agree to the redlines before the amended bill can be signed into law.
According to sources who reviewed Hochul’s redline, the entirety of the bill was crossed out, with replacement text added that is substantially similar to a separate AI safety law passed in California. That law was weakened significantly, with input from OpenAI and other Big Tech firms and lobbyists, relative to its initial form.
The new bill turns the safety and security “protocol” that model developers must write, publish on their website, and comply with into a “framework” that describes the developer’s general approach. The RAISE Act requirement that models with an “unreasonable risk of critical harm” be prohibited from release is absent in Hochul’s changes, as are the specific requirements that the bill’s authors included to make models safer.
While the RAISE Act required any “safety incident” involving an AI model to be disclosed within 72 hours, the Hochul substitute sets a time limit of 15 days unless there’s an imminent risk of death, and requires that assessments of model risk be transmitted only every three months. Penalties for violations of the statute were reduced from $10 million for a first violation and $30 million for subsequent violations to $1 million for any violation in the Hochul substitute.
The redline, issued Tuesday, kicked off a ten-day negotiation between Hochul and the bill’s legislative authors, setting a deadline of December 19. Hochul has until December 31 to sign or veto the bill.
The changes tilt toward the demands industry groups have loudly made since the bill was introduced. Venture capitalist Ron Conway and the industry association Tech:NYC have held recent fundraisers for Hochul. Conway has opposed the bill for months, and a Tech:NYC representative recently told The New York Times that her organization wanted the RAISE Act to more closely resemble the California AI safety law. Hochul’s substitute proposal does exactly that.
Hochul has also benefited from a fundraiser held by supporters of the RAISE Act. Sources have suggested to the Prospect that Hochul allowed the bill to linger on her desk for so long so she could extract as much as possible from both sides. But in the end, she appears to have sided with the tech interests.
The changes are seen as a major victory for AI interests that have spent large sums lobbying to water down the bill. For example, American Innovators, a group aligned with the venture capital giant Andreessen Horowitz, has been sending New Yorkers text blasts urging them to oppose the RAISE Act.
The same interests are trying to ruin the career of one of the bill’s sponsors. Bores is running for Congress to replace Rep. Jerry Nadler in Manhattan, and tech adversaries have vowed to destroy his candidacy. Think Big, one of an array of super PACs under the Leading the Future umbrella, which has promised to spend $100 million in the midterm elections, released the first attack ad against Bores this week, hitting him for authoring a bill that “hands AI to state regulators” and will “crush innovation, cost New York jobs, and fail to keep people safe.”
Conway, one of the recent donors to Hochul’s campaign, is backing the Leading the Future super PAC play.
Bores has laughed off the attacks, amid bipartisan anger about what AI development means for jobs, energy, and inequality.