Leaders in policy and tech call for balanced AI regulation
Law experts, industry leaders and an international envoy called for AI regulation that ensures both safety and innovation at a Monday panel held by the Stanford Cyber Policy Center. The discussion covered strategies for balancing innovation and safety, along with concerns such as job displacement and deepfakes.
Moderated by technology journalist and d.school lecturer Jacob Ward, the panel was dedicated to the Cyber Policy Center’s new report on generative AI governance, titled “Regulating Under Uncertainty.”
The conversation featured key voices from industry, government and academia: Florence G’Sell, visiting professor of private law and author of the report; Stanford Law professor Nathaniel Persily J.D. ’98; California State Senator Scott Wiener; Gerard de Graaf, Senior Envoy for Digital to the U.S. and Head of the EU Office in San Francisco; and Janel Thamkul, Deputy General Counsel at Anthropic.
The 460-page report emerged from the rapid surge in AI development after the launch of OpenAI’s ChatGPT in November 2022 and subsequent calls from various entities for guidelines and regulations. The report tackles the challenge of regulating AI in an uncertain environment, where governments must tread carefully to avoid either stifling innovation or allowing preventable harms. Yet, according to the report, the legal process continues to lag behind the rapid pace of technological change.
Wiener emphasized California’s pivotal role in AI regulation, given its position as a hub of tech innovation, highlighting a bill he spearheaded titled the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” The bill, S.B. 1047, would have established a regulatory framework for AI.
Though recently vetoed by Governor Gavin Newsom, the bill represented a landmark effort to address potentially catastrophic risks of AI. Wiener emphasized that the goal was to “reduce risks without hindering innovation,” and vowed to continue leading efforts to regulate AI responsibly.
Wiener drew parallels between AI and social media, warning that insufficient regulation could lead to similar consequences, such as the documented mental health impacts on youth and the spread of harmful content. He said that Congress has struggled to keep pace with tech developments, noting that apart from forcing ByteDance to divest from TikTok, it has not made significant strides in tech regulation in over 25 years.
G’Sell, who is also a visiting professor at the Stanford Cyber Policy Center and director of its Program on Governance of Emerging Technologies, underscored the dangers AI can pose. She shared the story of a 14-year-old Florida teenager who died by suicide after becoming obsessed with a chatbot on Character.AI. G’Sell said that while some of the incident’s details remain unclear, the tragedy exemplifies the need for proactive regulation.
G’Sell highlighted her report’s analysis of the tension among three approaches to AI regulation: self-regulation by the private sector, government regulation and co-regulation. She noted that private sector interests can sometimes overshadow risks, and that the slow-moving legal process compounds the challenge, as regulations are often outdated by the time they are finalized.
Thamkul addressed the urgency of regulating AI risks, highlighting the importance of transparency and public feedback in creating safer AI systems. At the same time, she stressed the need to focus on both insidious and immediate threats while avoiding overly cautious measures that could hinder beneficial AI development.
de Graaf brought a European perspective to the table, arguing that predictable regulation is good for business because it builds consumer trust. However, he noted that the U.S. faces a unique challenge: regulatory fragmentation across states makes compliance complex and can stifle innovation. In contrast, the EU’s standardized approach allows businesses to operate consistently across its 27 member states.
“You can serve 50 million users based on one rule,” de Graaf said.
Masha Ma, an AI governance practitioner in Silicon Valley and a Harvard Law graduate who previously worked in China, found the event enriching.
“It’s very useful for me to listen to different perspectives on how to regulate AI and also not only from the regulatory perspective but the industry perspective,” Ma said.
“We are a proud democracy,” de Graaf said. “We will invite our institutions and those democratically elected to make decisions on how these laws will be changed.”