How Congress dropped the ball on AI safety
The House Bipartisan Artificial Intelligence Task Force released its long-anticipated AI policy report in the last days of this Congress. The report offers thoughtful and comprehensive analysis, but it falls short of addressing the most crucial challenge of our time: preventing catastrophic risks from advanced artificial intelligence.
As someone who leads an organization dedicated to mitigating AI risks through policy development, I appreciate the careful work that went into this report.
Reps. Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.) deserve credit for shepherding this bipartisan effort, which makes several valuable recommendations. The report shrewdly discusses the need for responsible AI deployment, transparency requirements, robust data security and meaningful human oversight.
But after two years of congressional deliberation on artificial intelligence, we need more than careful analysis — we need decisive action. AI development is accelerating rapidly, with new and more powerful systems deployed every few months. Without new guardrails, these AI systems pose extreme risks to humanity's future.
As “AI godfather” Yoshua Bengio explained at the Aspen Strategy Group this October, a sufficiently advanced AI would most likely try to take over the world economy or even “eliminate humans altogether” in the interest of its own self-preservation.
Last month, former Google CEO Eric Schmidt warned that when a computer system reaches a point where it can self-improve, “we seriously need to think about unplugging it.”
Schmidt and Bengio are not alone. On average, respondents to a recent survey of 2,778 machine learning experts estimated a 16 percent chance that superintelligent AI will completely disempower humanity once it arrives. The same survey found a 10 percent chance that superintelligence will be invented as early as 2027 and a 50 percent chance that it will arrive by 2047.
American voters agree. In a poll conducted by the AI Policy Institute in November, 62 percent of respondents said they preferred Elon Musk’s approach of “focusing on catastrophic AI risks while building up infrastructure,” compared to only 11 percent who favored Marc Andreessen’s approach of “having as little AI regulation as possible in all areas.”
Unfortunately, the House Task Force’s report does not reflect this consensus among American voters.
In 273 pages, the report uses “catastrophic” only once in a footnote. Its discussion of chemical, biological, radiological and nuclear risks assumes that AI will never become sophisticated enough to help malicious actors develop weapons of mass destruction. It does nothing to address the possibility of advanced AI systems escaping human control.
These aren't science fiction scenarios to be dismissed. They are risks supported by the best available science, and they demand serious policy responses. That's why the Center for AI Policy, where I am executive director, advocates for strong and effective AI safety legislation to ensure the safe development of frontier AI capabilities.
To be clear, effective regulation need not stifle innovation. The opposite is true: thoughtful, risk-based governance approaches can promote sustainable technological progress while protecting against catastrophic outcomes. The report's authors understand this principle in theory, noting that “a thoughtful, risk-based approach to AI governance can promote innovation rather than stifle it.”
Here’s what it might look like for the next Congress to put that principle into practice:
First, fully fund the National Institute of Standards and Technology and give it the resources to operationalize AI safety standards. NIST's technical expertise will be crucial in creating measurable benchmarks for AI system safety and reliability.
Second, begin the legislative process to establish a dedicated federal regulator for frontier AI, with clear authority to require that advanced AI systems be trained and deployed in ways that avoid catastrophic risks.
Just as the Federal Aviation Administration makes sure our airplanes do not fall out of the sky and the Federal Deposit Insurance Corporation keeps our banks from failing, we need an agency with the authority and expertise to protect us against the worst harms from advanced AI.
Third, give whistleblowers inside AI companies clear and comprehensive legal protections that guarantee their right to warn the government about critical safety issues.
The House AI Task Force report marks a significant step forward in Congress’s ability to understand and engage with artificial intelligence, but reports alone won't protect us from the risks posed by increasingly powerful AI systems.
The window for establishing effective AI governance is closing rapidly. Every month without concrete action increases the risk of losing control of this transformative technology. Congress has spent two years studying the challenges posed by AI. Now it's time to act.
Jason Green-Lowe is executive director of the Center for AI Policy.