Why harmful content keeps reaching children online – and what advertising has to do with it
Children today can encounter harmful material online with alarming ease, including violent, sexual and self-harm content. While this is often treated as a moderation failure, the deeper cause is economic.
Much of the internet is built on a business model that rewards attention above all else. In simple terms, the algorithms that recommend content do not meaningfully distinguish between helpful, neutral and harmful material. Often described as “topic agnostic”, these systems have one primary task: to keep users watching, scrolling and clicking.
Why? Because attention drives advertising revenue.
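What “topic agnostic” means in practice can be sketched in a few lines of toy Python. This is an illustration under invented assumptions, not any platform’s actual ranking code: the point is what the scoring function measures, and what it never asks.

```python
# Toy sketch of a "topic-agnostic" ranker (illustrative only; real
# platform ranking systems are far more complex and are not public).
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # model's guess at how long users watch
    predicted_click_rate: float     # model's guess at click probability

def engagement_score(item: Item) -> float:
    # Note what is absent: nothing here asks whether the content is
    # helpful, neutral or harmful. Only expected attention is scored.
    return item.predicted_watch_seconds * item.predicted_click_rate

def rank(feed: list[Item]) -> list[Item]:
    # Order the feed purely by expected engagement, highest first.
    return sorted(feed, key=engagement_score, reverse=True)

feed = [
    Item("calm explainer", predicted_watch_seconds=40.0, predicted_click_rate=0.02),
    Item("shocking clip", predicted_watch_seconds=90.0, predicted_click_rate=0.08),
]
print([item.title for item in rank(feed)])  # ['shocking clip', 'calm explainer']
```

Whatever holds attention best rises to the top, by design.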
Most online platforms appear free to use, but they are largely funded through advertising. The longer users stay online, the more adverts they see and the more valuable they become to advertisers. As a result, platform design is shaped by what scholars call the “attention economy” – a system in which human attention is the resource being bought and sold.
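The arithmetic behind that incentive is straightforward. As a back-of-the-envelope sketch (every figure below is invented for illustration; real ad pricing varies enormously by platform and market), revenue scales directly with time spent:

```python
# Back-of-the-envelope model of attention-funded revenue.
# All numbers are invented for illustration, not real platform figures.

def session_revenue(minutes_on_platform: float,
                    ads_per_minute: float = 0.5,
                    price_per_thousand_ads: float = 4.0) -> float:
    """Revenue from one session: impressions shown x price per impression."""
    impressions = minutes_on_platform * ads_per_minute
    return impressions * (price_per_thousand_ads / 1000)

# Doubling time-on-platform doubles revenue: the core design incentive.
print(session_revenue(30))  # 0.06
print(session_revenue(60))  # 0.12
```

Under any such model, every extra minute of attention is money, which is why keeping users on the platform is the overriding design goal.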
Harvard scholar Shoshana Zuboff describes this model as “surveillance capitalism”: platforms collect behavioural data, predict what users will do next, and optimise systems to influence behaviour in ways that generate profit.
This matters because research consistently shows that emotionally charged content – material that provokes fear, outrage, anxiety or shock – generates higher engagement. Studies of recommender systems have found that algorithmic ranking tends to amplify content that keeps users emotionally activated, regardless of its social value.
For adults this can distort public debate and political discourse. For children, the consequences can be more serious because their online habits and emotional responses are still developing. Young people are more sensitive to social comparison, distressing narratives and emotionally intense material. When recommendation systems detect that a young user pauses on, searches for or engages with such content, they often respond by delivering more of it.
The result is what media researchers describe as a feedback loop. Engagement signals drive recommendations; recommendations increase exposure; exposure deepens engagement. Users are rarely targeted by a person. They are targeted by optimisation.
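That loop can be made concrete with a toy simulation (a hypothetical model for illustration only; real systems use far richer signals): each engagement nudges a topic’s weight upward, the highest-weighted topic gets recommended next, and the recommendation itself produces the next engagement.

```python
# Toy simulation of the engagement feedback loop (illustrative only).
from collections import Counter

weights = Counter({"sport": 1.0, "distressing": 1.0})

def recommend() -> str:
    # Recommendations follow whatever currently has the highest weight.
    return max(weights, key=weights.get)

def record_engagement(topic: str) -> None:
    # Engagement signals feed straight back into the ranking weights.
    weights[topic] += 0.5

record_engagement("distressing")  # one pause on distressing content...
for step in range(3):
    topic = recommend()       # ...drives recommendations,
    record_engagement(topic)  # which increase exposure and deepen engagement
    print(step, topic, dict(weights))
```

Run it and the gap widens with every cycle: no person chose to show the user more distressing material, the optimisation did.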
Public debate often assumes the solution is faster removal of harmful posts. Moderation is important, but there is a deeper issue. Harmful content continues to spread because the underlying incentives remain unchanged.
If platform revenue depends on attention, systems will always prioritise content that captures it most effectively. Removing individual posts does little if the algorithmic logic promoting engagement remains intact.
This helps explain why controversies around online harms keep resurfacing despite new safety tools and policies – and why proposed social media bans are unlikely to address the root cause. Researchers in platform governance increasingly argue that safety requires addressing system design and incentives, not just individual pieces of content.
The role of advertising – and why it matters now
Advertising rarely features in public conversations about online safety, yet it sits at the centre of the ecosystem. Advertising revenue funds recommendation systems, data collection practices and engagement optimisation strategies.
This does not mean advertisers intend harm. In fact, many brands are unaware of where their adverts appear within complex programmatic advertising supply chains. But the economic reality remains: engagement – including engagement with harmful material – generates value.
Scrutiny is growing. In the UK, regulators are implementing the Online Safety Act, lawsuits concerning social media harms are emerging internationally, and researchers are gaining access to internal platform documents through litigation. Together, these developments are opening up what has long been described as the “black box” of platform decision-making.
The digital environment did not evolve naturally. It was built through choices – technical, economic and political – made over decades. And because it was designed, it can be redesigned.
The conversation now moving into public view is not simply about banning phones or blaming young users. It is about incentives. What kinds of online environments do current business models reward? And what alternatives might prioritise wellbeing alongside innovation?
For people working inside advertising and technology industries, this moment may feel particularly significant. Greater public awareness means fewer opportunities to claim that online systems are too complex to understand or influence.
If safer digital spaces are the goal, the debate must move beyond individual content towards the structures that determine why that content spreads in the first place. Understanding how advertising, data and algorithms interact is not a technical detail. It is the key to building an internet that protects children rather than profiting from their attention.
Karen Middleton is affiliated with the Conscious Advertising Network