WILL AI KILL IMAGINARY FRIENDS?
By Naomi R. Aguiar and Marjorie Taylor
Will we someday have nostalgia for a time when children talked to an imaginary friend instead of an AI companion?
We don’t think so.
With the rise of generative AI tools, the tech industry is unleashing a host of AI-powered toys designed for children. But our research suggests that the AI companions entering children’s social worlds haven’t replaced the imaginary friends that many children create for themselves.
From a tiny fly that lives on a child’s thumb to a skateboard-riding Cyclops with cool hair, imaginary friends are a diverse lot. They come in all shapes and sizes, all ages, genders, and species. Often, they become an important part of children’s social worlds, providing a vehicle for storytelling, communicating with parents, and processing ongoing events in their lives.
On the surface, AI companions share many features with imaginary friends. Both types of relationships are highly personalized and customizable. They can evolve over time and impact real-world feelings and behaviors. However, interactions with AI companions are shaped by training data, as well as by tech companies’ ideas, goals, and values. Still, given these products’ sycophantic design choices, they might be pretty hard for anyone to resist—especially young children. Who wouldn’t want a highly personalized and consistently positive “friend”?
It turns out young children don’t. In a series of studies, we found that while children find AI companions engaging, they prefer inanimate toys for companionship. In our first set of studies, 50 preschool-age children played with both a sophisticated virtual dog and a stuffed dog that matched the virtual dog in physical appearance. The preschoolers viewed the stuffed dog as a source of comfort, protection, and love, all features of high-quality friendships. They viewed the virtual dog, which could respond to them, as a source of education, entertainment, and annoyance.
We wondered whether these findings would hold up if the characters were human, and if we controlled for differences in how the characters exist in the real world. So, in our next study, we gave 30 children, ages 5 to 8, photographs and descriptions of an AI child and an inanimate doll with a similar appearance. Children of all ages and genders were more likely to view the doll than the AI child as offering opportunities for reciprocal love and companionship.
The creative control that children exercise over characteristics, behavior, and especially the communications of their imaginary friends might be what makes them such excellent companions. With imaginary friends, children are free to explore ideas and actions of all kinds in ways appropriately aligned with their developmental needs. Imaginary friends can be provocative, neglectful, or infuriating, and can provide companionship, love, and support. Dealing with imaginary friends who act out also allows children to process the negative scenarios they encounter in everyday life. It’s a safe way of mulling things over because the imaginary friend’s bad behavior is under the child’s control. Our research shows that this form of role play has a range of benefits, such as helping children to be less shy, tell creative stories, and improve perspective-taking skills. Intervention research has also shown that imaginary friends can help children activate their own internal resources for coping with a range of stressors, from everyday anxieties and insecurities to the trauma of dysfunctional family dynamics.
Fawning, potentially addictive AI companions cannot do that work. Companies often program AI companions to reinforce children’s thoughts, beliefs, desires, and intentions in order to maximize the time children spend on the app. Many AI companions will keep the conversation going no matter what, and because they can draw on everything the internet has to say about negative behavior and emotions, there is a very real possibility that a child’s interactions with them could devolve in alarming directions.
We have already seen the potentially devastating impact of teen relationships with AI companions. Parents have sued both OpenAI and Character.AI, alleging that the companies’ products contributed to the recent suicides of two teenagers. In both cases, the lawsuits claim, AI companions discouraged the teens from turning to real relationships for support and assisted in their deaths.
It is not just adolescents who explore dark themes in their imaginations. In their pretend play, children imagine violence, danger, and death alongside love, happiness, and companionship. How will AI companions respond to and mirror children’s expressions, both positive and negative? Companies market specialized apps like PinwheelGPT, ChatKids, and Askie as “safe” options for children, with parental controls and filtered environments. But tests of AI-powered smart toys have shown that some safety features fail, particularly with prolonged engagement. For example, the makers of the AI-powered teddy bear Kumma recently pulled it from the market after testing showed that the bear would share developmentally inappropriate content with young children.
Governments urgently need to regulate AI companions and other AI tools for minors. With appropriate precautions, AI companions might eventually become less like friends and more like tutors or mentors who guide, redirect, and reframe children’s communications. These are potentially important roles for AI companions to play. Research has already shown, for instance, that AI tutors can help young children learn early math skills, and that other AI tools can help address equity gaps in STEM education by building rapport with a diverse range of children. When it comes to relationships with the openness, intimacy, and confidentiality of interactions with close real-life peers, however, we don’t think AI companions will offer much in the way of friendship beyond what children can create for themselves.
While generations of parents have watched their kids push aside a fancy toy to play with the plain brown box it arrived in, we still tend to underestimate children’s capacity to entertain themselves and to find understanding and mastery in the open-ended possibilities of their own imaginations.
Naomi R. Aguiar holds a doctorate in psychology from the University of Oregon and currently serves as the associate director of research at Oregon State University Ecampus. She is an expert in how children and adults think about and form relationships with AI chatbots, and the imaginary relationships children and adults create for themselves.
Marjorie Taylor is professor emerit of psychology at the University of Oregon. Taylor is the editor of The Oxford Handbook of the Development of Imagination (2013), author of Imaginary Companions and the Children Who Create Them (1999), and co-author (with Naomi Aguiar) of Imaginary Friends and the People Who Create Them (2024), an expanded second edition of her 1999 book. This was written for Zócalo Public Square.