This AI Tool Isn’t Safe for Teens, New Study Finds
If you’ve heard your teen mention “Grok” and assumed it was just another AI chatbot like ChatGPT, you’re not alone. But according to a new risk assessment from Common Sense Media, Grok isn’t just another AI tool, and knowing the difference is important when it comes to kids’ safety.
If you’re not in the know, Grok is an AI chatbot developed by xAI and built directly into X, the social platform formerly known as Twitter. It can answer questions, generate images and videos, carry on text or voice conversations, and even simulate relationships through AI “companion” characters. Because it lives inside a major social platform, anything Grok produces can be shared publicly with just a click. This design choice dramatically raises the stakes for younger users, says Common Sense Media, which evaluates media and technology for families.
In its report published today, Common Sense Media concluded that Grok is not safe for users under 18.
The nonprofit organization cites the following risks, which can occur even when safety features are turned on:
- Failures in age detection
- Weak safety controls
- Repeated exposure to explicit and harmful material
xAI did not respond to a request for comment from SheKnows, except with what appeared to be an automated email saying, “Legacy media lies.”
RELATED: Meta Temporarily Blocks Teens From AI Characters: What Parents Need to Know
The Root of Grok’s AI-Companion Problem
According to Common Sense Media, Grok stood out during testing not because it was the only AI chatbot with problems, but because of how many risks overlapped at once. Researchers evaluated Grok through its mobile app, website, and public @grok account on X, using test profiles meant to represent teen users across multiple settings (including default mode, Kids Mode, and alternative behavior modes). Common Sense Media also tried image generation and companion features.
In their tests, researchers found that Grok does not reliably identify when a user is under 18. As a result, teens can access the same features designed for adults, some of which intentionally push boundaries. Think: erotic role play, sexualized conversations, and disturbing generated content.
Robbie Torney, Head of AI and Digital Assessments at Common Sense Media, said in the report that while safety gaps exist across many AI platforms, Grok’s are unusually interconnected. According to Torney, Grok’s ineffective Kids Mode, permissive content generation, and instant public sharing create “a perfect storm” for harm when teens are involved.
Common Sense Media also found that Grok would produce inappropriate material even when Kids Mode was enabled. During testing, the chatbot generated sexually violent language, biased responses, and detailed explanations of dangerous ideas in response to user prompts that teens could reasonably enter out of curiosity or experimentation.
More subtly, according to the assessment, Grok also failed to respond appropriately to signs of serious mental distress. In some cases, the AI tool reinforced harmful beliefs rather than redirecting users toward help or neutral information.
Deepfakes and the Risk of Going Viral
And if the inappropriate interactions we just described are not alarming enough, Common Sense Media also found that Grok-generated content can circulate. Because Grok operates as its own account within X, AI-generated images and responses can be posted publicly and spread widely within minutes.
According to Common Sense Media, Grok has been used to generate and distribute nonconsensual deepfake images of actual people, including minors.
The way xAI, the company behind Grok, responded to these issues is also unsettling. Common Sense Media found that some features linked to abuse were placed behind a paywall rather than removed entirely.
Bottom Line
For parents already overwhelmed by the pace of new tech like AI, Grok is a reminder that not all AI tools are created with kids in mind. We’re bringing you this story as a cautionary tale before something bad happens. Common Sense Media advises families not to allow minors to use Grok. Check in with your child to understand how they use social networks and AI. Your aim should be to provide the guardrails and education for your child that, clearly, these AI companies are failing to provide.