Meta Will Now Show Parents What Their Teens Are Asking AI
While parents tend to be freaked out by AI and how it’s been infiltrating their kids’ lives, teens are largely nonchalant, already relying on the tool for tasks from homework to companionship, according to recent reports. Today, Meta announced a new feature for its teen accounts that aims to bridge that gap — mainly by giving parents access to their teen’s Meta AI search history.
“This is just the starting point,” noted a blog post from Meta on the new feature, which was tested in October and is now also available to parents in the UK, Canada, Australia, and Brazil. “As we roll out these insights to parents around the world, we’ll keep listening to feedback from both parents and experts and explore ways to make them even more valuable.”
Now, parents supervising Teen Accounts on Facebook, Messenger, or Instagram will see a new “Insights” tab within the supervision settings, both in-app and on the web. From there, they can see the topics — such as Lifestyle (fashion, food, holidays) or Health and Wellbeing (fitness, physical health, mental health) — their teen has asked Meta AI about over the past week.
The new feature is just the latest safety protection built into Teen Accounts. Meta AI, for example, “should not give age-inappropriate responses that would feel out of place in a 13+ movie,” the press announcement notes. “This means Meta AI may not answer certain questions, and in some cases may direct teens to resources instead.” Parents will be able to see the topics of their teen’s questions even when Meta AI declined to answer them.
In February, Meta announced it was developing alerts to let parents know if their teen has searched for issues related to suicide or self-harm; those have not yet rolled out but, says the latest news release, “We’ll have more to share on those alerts soon.”
As part of the new AI feature, Meta is also providing conversation starters, developed in partnership with the Cyberbullying Research Center, to help parents have informative, non-judgmental conversations with their teens about AI. They’re available on the Family Center website, and parents can also access them through a link in the new Insights tab.
Finally, the company is forming an AI Wellbeing Expert Council, a group of advisors — from the National Council for Suicide Prevention, the University of Michigan, and other institutions — who will provide ongoing input on Meta AI experiences for teens, with the goal of keeping them safe and age-appropriate.
This week’s announcement comes just a month after Meta was found negligent in landmark social media cases in both New Mexico and California. Following those verdicts, Meta whistleblower Kelly Stonelake wrote that the company’s response to the trial “has been like its response to every accountability moment. Announce a new safety feature. Generate thousands of pieces of press. Offer parents the appearance of control.”
Stonelake pointed to a recent independent evaluation of Instagram’s teen accounts that tested 47 of the 53 listed safety features, finding that 64% were either no longer available or ineffective, and that only 17% worked as described.
“When the tobacco industry faced evidence that cigarettes caused cancer, it responded with light cigarettes and cartoon mascots,” she continued. “Teen Accounts are the modern equivalent: a sop to worried parents and regulators, designed to preserve profit while avoiding real accountability.”
So, will the new AI safety features work to protect our teens? That jury, at least, is still out.