At five years old, Institute for Human-Centered AI looks to the future
This year, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) celebrated the fifth anniversary of its founding. Looking forward, the institute aims to continue fostering a better understanding of artificial intelligence’s (AI) impacts on society.
In HAI’s five years of operation, the institute has focused on policy, research and education, and has funded the research projects of more than 400 faculty, said HAI co-founder and co-director James Landay, a computer science professor.
Landay highlighted HAI’s success in bringing congressional staff from across the political spectrum to Stanford to learn about AI. The institute’s policy successes include the passage of a congressional bill to create a national AI research resource that would enable researchers in nonprofits and academia to work on large AI problems. The institute was a driving force behind the bill, he said.
Landay, who specializes in human-computer interaction, has been a leader in HAI’s research initiatives and has helped define and differentiate human-centered AI from traditional AI. His research has ranged from wearable human-computer interfaces to design thinking.
HAI has encouraged researchers to contemplate the societal impacts of new work in AI, Landay said.
“We’ve moved the conversation to thinking about the broader community of people who might be impacted by AI systems, even if they’re not the direct user,” he said.
Psychology and computer science professor and HAI affiliate Daniel Yamins agreed, adding that HAI has been successful at “raising awareness of interesting questions around AI and human-centric issues” involving ethics, safety and cognition.
Beyond research, HAI has expanded its educational programs to instill human-centered values in both students and AI-related industry affiliates. At Stanford, the institute has expanded programming focusing on human-centered AI for students, for example, by offering an HAI-focused track for the symbolic systems major.
Patrick Ye ’26, a computer science student on the AI track conducting research at HAI, said professors at the institute “share a really great light of inspiration for how young researchers could contribute their own efforts into a line of research.”
As for HAI’s efforts beyond Stanford, Landay highlighted one of the institute’s industry programs that aims to encourage its industry affiliates to consider the ethical impacts of their work.
“That’s another way to have an impact on the world,” Landay said. “To have these companies learn about our point of view, while they also help fund research at Stanford.”
An area of future work for the institute is transparency of data in large AI systems, Landay said. But, he added, companies are likely reluctant to provide that transparency, since they view data as proprietary information or want to shield themselves from legal liability for “having used data illegally against people’s ownership or copyright.”
Landay is also co-leading a partnership established last week between HAI and the Stanford Robotics Center, aimed at ensuring AI-driven robotic systems are deployed in a safe manner.
In its mission, HAI emphasizes an interdisciplinary approach to AI that bridges each of Stanford’s seven schools. Institute leaders hope to continue that commitment moving forward.
“All voices need to be involved in the conversation, shaping a technology that is going to reshape all of society. It can’t just be technologists who decide that,” Landay said. “I think we were the right place to do this, and it’s really exciting to continue that.”