
From Meta and YouTube to AI: Why Platforms Need Limits on Addictive Design

To limit algorithmic addiction, regulators should set clear thresholds for how much compulsive engagement AI systems are allowed to create.

The recent US case finding Meta and YouTube negligent in a social media addiction lawsuit marks a profound turning point. For the first time, a court accepted the proposition that harm from digital platforms may arise not primarily from user-generated content, but from the architecture of the systems themselves. By shifting the locus of liability from speech to design, the case begins to carve out a new domain in law: algorithmic product liability.

Its significance extends well beyond tort doctrine. It signals the early emergence of a governance problem that law has not yet fully conceptualized: how to regulate systems that optimize human behavior at scale. The plaintiffs’ argument that platforms engineered compulsive engagement loops, especially among vulnerable users, implicitly reframes “addiction” from a substance-based pathology into a systemically induced behavioral condition.

This piece builds on that conceptual shift. It proposes a framework for understanding and governing what may be called “algorithmic addiction,” introduces a measurable Addiction Intensity Index, and advances a policy standard centered on a minimum acceptable level of addiction intensity. The core question is simple but profound: if platforms can shape human behavior, how should society set limits on that power?

From Substance Addiction to Algorithmic Addiction

Traditional understandings of addiction focus on substances—alcohol, nicotine, or drugs—that create chemical dependency. But the emerging evidence, and the logic of the Meta–YouTube case, point toward a broader phenomenon. Addiction can arise not only from what people consume, but from how systems shape their behavior.

The key distinction is between short-term reward and long-term well-being. Human beings are naturally drawn to immediate gratification, novelty, social validation, and emotional stimulation. But these short-term rewards do not always align with long-term interests such as mental health, productivity, and autonomy.

Algorithmic systems exploit this gap. By continuously learning what captures attention, they deliver content that maximizes immediate engagement. Over time, this can create a pattern in which users spend more and more time on a platform even as their overall well-being declines.

This is what we mean by algorithmic addiction: a condition in which engagement increases while long-term welfare deteriorates, driven by the adaptive interaction between user and system. Importantly, addiction in this sense is not simply a property of the individual. It is a property of the system-user relationship.

This leads to a deeper structural insight: modern AI systems do not merely respond to human preferences; they actively shape them. In doing so, they create a feedback loop in which both the system and the user are continuously adapting to each other. The outcome of that loop can be either beneficial or harmful, depending on how it is designed.

Measuring Addiction: The Addiction Intensity Index

If algorithmic addiction is to be governed, it must first be made measurable. This is not straightforward, because addiction is not a single phenomenon. It has behavioral, psychological, and welfare dimensions.

The proposed Addiction Intensity Index combines these dimensions into a single framework.

The first dimension is behavioral. This includes observable patterns such as how long users stay on a platform, how frequently they return, and whether they consistently exceed their intended usage. A key signal is the inability to stop, that is, when users continue engaging despite intending not to.

The second dimension is psychological. This captures the subjective experience of compulsion: the feeling of being unable to disengage, the anxiety or discomfort that arises when access is removed, and the sense that the platform has become difficult to resist.

The third dimension, and the most important, is welfare. This asks whether platform use is crowding out higher-value activities or contributing to negative outcomes such as sleep disruption, stress, or regret. Surveys that ask users whether they spent more time than intended, or whether they feel worse after use, can serve as practical proxies.

The critical point is that addiction is not simply high usage. It is high usage combined with loss of control and declining well-being. The Addiction Intensity Index is designed to capture that distinction.
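To make this distinction concrete, the sketch below shows one way the three dimensions might be combined into a single score. The sub-scores, weights, and example values are illustrative assumptions for the sake of exposition, not figures proposed in this piece.

```python
from dataclasses import dataclass

@dataclass
class UsageSignals:
    """Normalized signals in [0, 1]; higher means stronger evidence of addiction."""
    # Behavioral: session length, return frequency, overshoot of intended use
    behavioral: float
    # Psychological: self-reported compulsion, distress when access is removed
    psychological: float
    # Welfare: regret, sleep disruption, crowding out of higher-value activities
    welfare: float

def addiction_intensity_index(signals: UsageSignals,
                              weights: tuple[float, float, float] = (0.3, 0.3, 0.4)) -> float:
    """Combine the three dimensions into a single index in [0, 1].

    The welfare dimension carries the largest (hypothetical) weight, reflecting
    the argument that addiction is high usage *combined with* loss of control
    and declining well-being, not high usage alone.
    """
    wb, wp, ww = weights
    return wb * signals.behavioral + wp * signals.psychological + ww * signals.welfare

# Example: heavy but non-compulsive use scores lower than moderate use with high regret.
heavy_but_fine = UsageSignals(behavioral=0.9, psychological=0.2, welfare=0.1)
compulsive = UsageSignals(behavioral=0.6, psychological=0.8, welfare=0.9)
print(addiction_intensity_index(heavy_but_fine))  # 0.37
print(addiction_intensity_index(compulsive))      # 0.78
```

The worked example illustrates the point in the paragraph above: raw time-on-platform alone does not drive the score; the combination of compulsion and welfare decline does.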

From Measurement to Regulation: A Minimum Addiction Standard

Once addiction can be measured, it becomes possible to set limits. Regulators should establish a Minimum Addiction Intensity Threshold: a benchmark that defines the maximum level of addiction intensity a system is allowed to induce.

This approach does not seek to eliminate engagement or enjoyment. Rather, it sets a boundary: platforms may optimize for attention, but only within limits that protect user welfare.

To be effective, this standard must be integrated into the entire lifecycle of AI systems.

At the design stage, developers should assess whether features such as infinite scroll, autoplay, and algorithmic personalization are likely to create compulsive use patterns. At the training stage, systems should be optimized not only for engagement but also for minimizing addictive dynamics. At the testing stage, platforms should conduct controlled experiments to evaluate addiction risk. After deployment, continuous monitoring and independent audits should ensure ongoing compliance.
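As a sketch of how such a threshold might operate at the testing and monitoring stages, the snippet below compares measured index values against a regulatory limit, with a separate report for each user cohort. The limit value, cohort names, and scores are hypothetical illustrations, not proposed numbers.

```python
REGULATORY_LIMIT = 0.6  # hypothetical maximum permitted Addiction Intensity Index

def audit_cohorts(cohort_scores: dict[str, float],
                  limit: float = REGULATORY_LIMIT) -> dict[str, bool]:
    """Return a per-cohort compliance report: True means the cohort is within the limit.

    In practice each score would come from controlled experiments at the testing
    stage or from continuous monitoring and independent audits after deployment,
    with particular attention to vulnerable groups such as minors.
    """
    return {cohort: score <= limit for cohort, score in cohort_scores.items()}

# Example audit: the teen cohort exceeds the limit and would trigger remediation.
report = audit_cohorts({"adults": 0.42, "teens": 0.71})
print(report)  # {'adults': True, 'teens': False}
```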

This represents a shift from reactive to proactive governance. Instead of waiting for harm to occur and then assigning liability, regulation would shape the design of systems from the outset.

Trust, Adoption, and the Strategic Incentive for Social Platforms

It is natural to assume that platforms will resist such a standard. After all, their business models are built on maximizing engagement. Any constraint on addictive design appears, at first glance, to threaten revenue.

But this assumption overlooks a critical dynamic: the growing erosion of trust.

Public concern about algorithmic addiction is no longer marginal. Parents restrict their children’s access to social media. Users attempt to limit their own usage through digital detoxes. Governments and institutions consider bans or restrictions. These responses reflect a deeper anxiety that platforms are not acting in the user’s interest.

This trust deficit is already limiting the growth and legitimacy of the industry.

A platform that credibly demonstrates adherence to a minimum addiction standard would fundamentally change this dynamic. By showing that user welfare is built into its design, it would reduce fear and resistance. Trust, once restored, expands the potential user base and stabilizes long-term engagement.

In the short term, reducing addictive features may decrease the intensity of use. But in the long term, it can increase the number of users and the sustainability of engagement. The result is a shift from a fragile, extractive model to a more durable, trust-based model.

If properly calibrated, this approach offers a rare possibility: a genuine win–win outcome in which society benefits from reduced harm while platforms benefit from increased legitimacy and adoption.

The Central Tension: User Welfare and Platform Incentives

The central challenge is to balance two competing forces. On one side are platform incentives, which favor maximizing engagement. On the other is social welfare, which requires limiting the harmful effects of excessive engagement.

Left to themselves, platforms will tend to prioritize short-term engagement because it directly drives revenue. But this creates a negative externality: the social costs of addiction are not fully borne by the platforms themselves.

The role of governance is to correct this imbalance. By setting a minimum addiction standard, regulators effectively redefine the boundaries within which platforms can optimize. The goal is not to eliminate incentives, but to align them with societal interests.

This is similar to how environmental regulations limit pollution without banning industrial activity, or how financial regulations constrain risk without eliminating markets. In each case, the objective is to preserve innovation and growth while preventing systemic harm.

The key to success lies in finding the right balance. If the standard is too strict, it may stifle innovation and user enjoyment. If it is too weak, it will fail to address the problem. The challenge is to identify a level that protects welfare while allowing platforms to thrive.

Feasibility and Strategic Implications for Platform Governance and User Protection

From a social perspective, the case for intervention is strong. The issue is fundamentally about autonomy: whether individuals retain control over their time, attention, and choices.

From a technological perspective, the framework is feasible, though imperfect. Platforms already collect detailed data on user behavior. The main difficulty lies in measuring well-being, which is inherently more complex. Nevertheless, practical proxies such as regret and time inconsistency provide a viable starting point.

From a political perspective, momentum is building but unevenly. Europe has begun to regulate addictive design features. China has imposed limits on youth usage, though for different reasons. In the United States, the legal system is only beginning to engage with these questions, as reflected in the Meta–YouTube case.

Industry resistance will be significant. But as the trust argument suggests, forward-looking firms may come to see compliance not as a burden but as a competitive advantage.

Governing Optimization in AI Systems and Platform Design

The rise of artificial intelligence systems marks a fundamental shift. These systems do not merely serve human preferences; they shape them. As a result, governance must move beyond regulating outcomes to regulating the process of optimization itself.

The framework proposed here—centered on measuring and limiting addiction intensity—offers one way to do so. It translates a moral concern into a practical standard, making it possible to govern a complex and evolving problem.

The Meta and YouTube case may ultimately be remembered as the moment when this issue came into focus. It forces us to confront a new reality: when technology can engineer compulsion, society must learn to engineer limits.

The future of AI governance will depend on how well we meet that challenge.

About the Author: Jianli Yang

Dr. Jianli Yang is a distinguished visiting fellow at the Center for the National Interest and a columnist for National Review. He is the founder and president of the Citizen Power Initiatives for China and author of For Us, The Living: A Journey to Shine the Light on Truth and It’s Time for a Values-Based “Economic NATO.” He was a Tiananmen student leader and a political prisoner of China.

The post From Meta and YouTube to AI: Why Platforms Need Limits on Addictive Design appeared first on The National Interest.
