When The Internet Grew Up — And Locked Out Its Kids
In December 2025, the world crossed a threshold. For the first time, access to the major social media platforms was determined not by interest, connection, or curiosity, but by a birth date. A new law in Australia decrees that people under 16 may no longer legally hold accounts on major social-media services. What began as parental warnings and optional “age checks” has hardened into something more fundamental: a formal re-engineering of the Internet’s social contract, one increasingly premised on the assumption that young people’s participation in networked spaces is presumptively risky rather than conditionally beneficial.
Australia’s law requires major platforms to block any user under 16 from holding an account, or face fines of nearly A$50 million. Platforms must take “reasonable steps,” and many will rely on ID checks, biometric checks, or algorithmic age verification rather than self-declared ages, which are easily falsified. The law took effect on December 10, 2025, and by that date major platforms were expected to have removed under-16 accounts or face the consequences.
It’s not just Australia. The European Parliament has proposed sweeping changes to the digital lives of minors across the European Union. In late November 2025, MEPs voted overwhelmingly in favor of a non-binding resolution that would make 16 the default minimum age to access social media, video-sharing platforms and even AI-powered assistants; 13-to-15-year-olds could still gain access, but only with parental consent.
The push is part of a broader EU effort. The Commission is working on a harmonised “age-verification blueprint app,” designed to let users prove they are old enough without revealing more personal data than necessary. The tool might become part of a future EU-wide “digital identity wallet.” Its aim: prevent minors from wandering into corners of the web designed without their safety in mind.
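To make that idea concrete, here is a minimal, purely illustrative sketch of what “proving age without revealing more” can mean: a trusted issuer signs a single yes/no claim (“over 16”), and a platform verifies that claim without ever seeing a birth date or a name. The function names and the shared-secret HMAC are assumptions made for brevity; any real scheme, including the Commission’s blueprint, would rely on public-key credentials and far stronger privacy protections.

```python
# Hypothetical sketch of a privacy-preserving age attestation.
# A trusted issuer signs one boolean claim ("over_16"); the platform
# verifies the signature and learns nothing else about the user.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-shared-secret"  # placeholder; real systems use public-key signatures


def issue_attestation(is_over_16: bool) -> dict:
    """Issuer signs a minimal claim; no birth date or name is included."""
    claim = json.dumps({"over_16": is_over_16})
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def verify_attestation(attestation: dict) -> bool:
    """Platform checks the signature, then reads only the boolean claim."""
    expected = hmac.new(ISSUER_KEY, attestation["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return False  # forged or tampered attestation
    return json.loads(attestation["claim"])["over_16"]


if __name__ == "__main__":
    token = issue_attestation(is_over_16=True)
    print(verify_attestation(token))  # True: age threshold proven, nothing else revealed
```

The point of such a design is data minimisation: the platform receives a yes/no answer vouched for by a trusted issuer, not a copy of a passport.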
Several EU member states are already acting. Denmark has proposed banning social media for under-15s unless parental consent is granted; others, including France, Spain and Greece, support an EU-wide “digital majority” threshold to shield minors from harmful content, addiction and privacy violations.
The harm narrative – and its limits
The effectiveness of these measures remains uncertain, and the underlying evidence is more mixed than public debate often suggests. Much of the current regulatory momentum reflects heightened concern about potential harms, informed by studies and reports indicating that some young people experience negative effects in some digital contexts — including anxiety, sleep disruption, cyberbullying, distorted self-image, and attention difficulties. These findings are important, but they do not point to uniform or inevitable outcomes. Across the research, effects vary widely by individual, platform, feature, intensity of use, and social context, with many young people reporting neutral or even positive experiences. The strongest evidence, taken as a whole, does not support the claim that social media is inherently harmful to children; rather, it points to clustered risks associated with specific combinations of vulnerability, design, and use.
European lawmakers point to studies indicating that one in four minors displays “problematic” or “dysfunctional” smartphone use. But framing these findings as proof of universal addiction risks collapsing a complex behavioral spectrum into a single moral diagnosis — one that may obscure more than it clarifies.
From the outside, the rationale feels compelling: we would never leave 13-year-olds unattended in a bar or a casino, so why leave them alone in an attention economy designed to capture and exploit their vulnerabilities? Yet this comparison quietly imports an assumption — that social media is analogous to inherently harmful adult-only environments — rather than to infrastructure whose effects depend heavily on design, governance, norms, and support.
What gets lost when we generalize harm
When harm is treated as universal, the response almost inevitably becomes universal exclusion. Nuance collapses. Differences between children — in temperament, resilience, social context, family support, identity, and need — are flattened into a single risk profile.
The Internet, however, was never meant to serve a single type of user. Its power came from universality — from its ability to give voice to the otherwise voiceless: shy kids, marginalized youth, LGBTQ+ children, rural teenagers, creative outsiders, identity seekers, those who feel alone. For many young people, social media platforms are not simply entertainment. They are places of learning, authorship, peer support, political awakening, and cultural participation. They are where teens practice argument, humor, creativity, solidarity, dissent — often more freely than in offline institutions that are tightly supervised, hierarchical, or unwelcoming.
When policymakers speak about children online primarily through the language of damage, they risk erasing these positive and formative uses. The child becomes framed not as an emerging citizen, but as a passive object of protection — someone to be shielded rather than supported, managed rather than empowered.
This framing matters because it shapes solutions. If social media is assumed to be broadly toxic, then the only responsible response appears to be removal. But if harm is uneven and situational, then exclusion becomes a blunt instrument — one that protects some children while actively disadvantaging others.
Marginalized and vulnerable youth are often the first to feel this loss. LGBTQ+ teens, for example, disproportionately report finding affirmation, language, and community online long before they encounter it offline. Young people in rural areas or restrictive households rely on digital spaces for exposure to ideas, mentors, and peers they cannot access locally. For these users, access is not a luxury — it is infrastructure.
Generalized harm narratives also obscure agency. They imply that young people are uniquely incapable of learning norms, developing judgment, or negotiating risk online — despite doing so, imperfectly but meaningfully, in every other social domain. This assumption can become self-fulfilling: if teens are denied the chance to practice digital citizenship, they are less prepared when access finally arrives. Treating youth presence online as a problem to be solved — rather than a reality to be shaped — risks turning protection into erasure. When the gate is slammed shut, a lot more than TikTok updates are lost: skills, social ties, civic voice, cultural fluency, and the slow, necessary process of learning how to exist in public.
As these policies spread from Australia to Europe — and potentially beyond — we face a world in which digital citizenship is awarded not by curiosity or contribution, but by age and identity verification. The Internet shifts from a public square to a credential-gated club.
Three futures for a youth-shaped Internet
What might this reshaping look like in practice? Three broad futures could emerge, depending on how regulators, platforms and civil society act.
1. The Hard-Gate Era
In the first future, exclusion becomes the primary safety mechanism. More countries adopt strict minimum-age laws. Platforms build age-verification gates based on government IDs or biometric systems. This model treats youth access itself as the hazard — rather than interrogating which platform designs, incentive structures, and governance failures generate harm.
The social cost is high. Marginalized young people may lose access to vital communities, and the Internet becomes something young people consume only with permission, not something they help shape.
2. The Hybrid Redesign Era
In a second future, regulatory pressure triggers transformation rather than exclusion. Age gates are narrow and specific. Platforms are forced to redesign for youth safety. Crucially, this approach assumes that harm is contingent, not inherent — and therefore preventable through design.
Infinite scroll and autoplay may be disabled by default for minors. Algorithmic amplification might be limited or made transparent. Data harvesting and targeted advertising curtailed. Privacy defaults strengthened. Friction added where needed.
Here, minors remain participants in the public sphere — but within environments engineered to reduce exploitation rather than maximize engagement at any cost.
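As a purely hypothetical illustration of what such defaults might look like in a platform’s configuration, the sketch below flips engagement-maximising features off for accounts flagged as belonging to minors; the field names are invented for this example, not any platform’s real settings schema.

```python
# Hypothetical minor-safety defaults; field names are illustrative only.
from dataclasses import dataclass


@dataclass
class AccountSettings:
    infinite_scroll: bool = True
    autoplay: bool = True
    algorithmic_ranking: bool = True
    targeted_ads: bool = True
    public_profile: bool = True


def minor_safe_defaults() -> AccountSettings:
    """Defaults applied when an account is flagged as under 16."""
    return AccountSettings(
        infinite_scroll=False,      # paginated feed instead of endless scroll
        autoplay=False,             # video plays only on an explicit tap
        algorithmic_ranking=False,  # chronological or followed-only feed
        targeted_ads=False,         # no behavioural ad targeting
        public_profile=False,       # private profile unless explicitly opened
    )
```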
3. The Parallel Internet Era
In the third future, bans fail to eliminate demand. Underage users migrate to obscure platforms beyond regulatory reach. This outcome highlights a central flaw in the “inherent harm” narrative: when access is blocked rather than improved, risk does not disappear — it relocates.
The harder question
There is real urgency behind these debates. Some children are struggling online. Some platform practices are demonstrably irresponsible. Some business models reward excess and compulsion. But if our response treats social media itself as the toxin — rather than asking who is harmed, how, and under what conditions — we risk replacing nuanced care with blunt control.
A digital childhood can be safer without being silent, protected without being excluded, and supported without being stripped of voice.
The question is not whether children should be online. It is whether we are willing to do the harder work: redesigning systems, reshaping incentives, and offering targeted support — instead of declaring an entire generation too fragile for the public square.
Konstantinos Komaitis is Resident Senior Fellow, Democracy and Tech Initiative, Atlantic Council