More people might misunderstand AI than you think. Here’s why.
In an interview with The Daily, Chris Gregg, associate professor of computer science, argued that most people probably aren’t informed about how artificial intelligence (AI) works or about its societal implications.
As AI continues to shape industries and influence decision-making, gaps in AI literacy have become more relevant to the way users interact with the rapidly advancing technology. Students and professors alike warn that there are already major disparities in AI use and literacy between demographic groups.
Gregg said much engagement with AI remains surface-level, despite a high and growing number of encounters with the topic through articles, podcasts and news stories.
“I don’t know that [the general public] necessarily would have the tools to go find a lot of the information at the nuanced level,” he said.
Benjamin Xie, a Graduate School of Education (GSE) postdoctoral fellow with the Human-Computer Interaction (HCI) Group, sees disparities in AI literacy being introduced as early as kindergarten, pointing to a “My First AI” book meant for kindergarteners. But according to Xie, not all parents will have the resources to educate their children on AI.
Xie drew parallels with historical patterns in edtech adoption. “Richer families, particularly in ‘WEIRD’ (Western, Educated, Industrialized, Rich, Democratic) communities like Palo Alto, are more likely to embrace AI tools,” he said. “Those disparities are almost certainly already happening.”
Denise Pope ’88 Ph.D. ’99 — a GSE senior lecturer specializing in curriculum studies and student engagement — emphasized the varying levels of AI literacy across age groups. “The knowledge varies. Some of them are very aware, and actually have been victims of cyber fakes or deep[fake] nudes or things like that,” she said.
She also highlighted disparities in AI access, noting that base-level necessities, such as internet access, are required for its use. She cautioned against blanket bans on AI in schools that could worsen existing imbalances in tech exposure.
For Jessica Ann, a manager at Stanford AI Tinkery, current high school students are particularly vulnerable if education frameworks for AI literacy don’t reach them in time.
Angela Nguyen ’26, a Project Leader of Stanford AI Alignment, emphasized the growing digital divide. Nguyen, a first-generation, low-income student who went to an under-resourced high school, explained how her ADHD diagnosis made her more aware of how technology can leave vulnerable populations behind.
Nguyen further warned that the gap in AI literacy could worsen inequalities, especially for low-income communities. “Some people don’t even have Wi-Fi access… or they might not speak English as a first language,” she said, noting that many AI models are trained primarily on English-language data. As AI continues to shape education and labor sectors, she warned that communities with limited access risk falling further behind, contributing to a “third phase of the digital divide.”
Sarah Levine, a GSE assistant professor, said that this issue transcends school funding levels. “What a well-resourced school offers is possibilities and options… but there are other dangers,” she said. Text-based AI reflects the biases of the data it has been trained on. “Students who are middle class, students who are white, are more likely to experience text-based AI as aligned with the way they speak and with the way they learn,” she said.
Selin Ozgursoy ’28, who grew up in Turkey, believes AI should be something “integrated into the curriculum as soon as possible.”
“Compared to here, Turkish students are way more illiterate [about AI], and this is totally not their fault,” she said.
A major concern for Ozgursoy is the widening gap between those who benefit from AI advancements and historically marginalized communities. “One group is being left behind, and the gap is growing by the day,” she said.
The core issue, Nguyen believes, is a systemic lack of diversity in both the teams developing technology and the audiences considered during the design process. “A lot of companies will say that they do talk to communities for, like, a PR thing, but… it’s reflected on the product of who’s actually using it,” she said.
“In terms of ethics teams at certain companies, it’s typically like the people who are part of it are the ones who really care,” Nguyen said, referencing groups like Microsoft’s Fairness, Accountability, Transparency and Ethics (FATE) team. However, she said that these teams often face funding cuts and are excluded from core product development.
When discussing the ethical responsibilities of tech companies, Emma Beharry ’26, co-president of AI Alignment, said, “There’s recently been more pressure on tech companies to try and be ethical, whether it’s employees walking out or protesting when companies contract their technology for the military.”
Historically, cutting-edge technological advancements were often driven by publicly accessible research institutions. Now, partnerships between private companies like OpenAI and Google and educational institutions may be necessary to ensure effective and ethical AI adoption in schools.
Despite the challenges, Ozgursoy remains optimistic about the future of ethical AI development. “We were born into technology. We have the capability to transform AI into a human-centered, augmentative tool.”
Nguyen encouraged constructive critique of both prestigious institutions like Stanford and the broader tech industry. “Even though we’re privileged to be here, we have identities that are going to be impacted by the technologies being built,” she said.
Beharry agreed, alluding to AI Alignment’s message: “AI will change the world. Who will change AI?”