
Courts are finally punishing Big Tech for harming kids. Here’s the catch.

Vox

This week, juries in California and New Mexico dealt a pair of landmark verdicts against America’s social media giants. 

In Los Angeles, jurors awarded $6 million to a young woman who alleged that Instagram and YouTube had damaged her mental health. A day earlier, a jury in Santa Fe ruled that Meta had designed its social media platforms in a manner that harmed minors — and ordered the company to pay $375 million in recompense.

These decisions constituted a breakthrough for a legal movement that sees social media companies as the new “Big Tobacco” — an industry that knowingly peddles harmful and addictive products. And it was a triumph for advocates of “child online safety,” who believe that social media is corrosive to minors’ psychological well-being. With thousands of similar lawsuits pending, the California and New Mexico verdicts could prove to be transformative precedents.

Yet the decisions have also raised alarm bells for many free speech advocates. To organizations like FIRE — and civil libertarian writers like Reason’s Elizabeth Nolan Brown — these decisions will do more to undermine free expression online than to safeguard young people’s mental well-being. 

To better understand — and interrogate — this perspective, I spoke with Nolan Brown. We discussed how the recent verdicts could open the door to broader censorship, the evidence for social media’s psychological harms, and whether parents can sufficiently protect their kids from problematic internet use without the government’s help. Our conversation has been edited for clarity and concision.

You’ve written that these verdicts are “a very bad omen for the open internet and free speech.” How so?

One key protection for online speech is Section 230 of the Communications Decency Act, which prevents online platforms from being held liable for speech they host but don’t create. 

What we’re seeing in these cases is an attempt to get around Section 230 by recharacterizing speech issues as “product liability” issues. Instead of saying, “We’re going after platforms for hosting harmful speech,” the plaintiffs are saying, “We’re going after them for negligent product design.” 

In other words, the choices that social media companies make about how to curate their feeds or encourage engagement?

Right. Some of the things they complained about were “endless scroll” (where you keep going down and the feed doesn’t stop at the end of a page), recommendation algorithms that promote content that a user is more likely to engage with, and beauty filters.

But ultimately, if you look at what they’re actually going after, it comes down to speech. When you talk about TikTok or YouTube being so engaging that it’s “addictive,” you’re talking about content: No matter how TikTok’s algorithm is designed, it wouldn’t be compelling to people if the content wasn’t compelling. 

Similarly, in the California case, the plaintiff argued that Meta allowing beauty filters on images was a negligent product design, since they promote unrealistic beauty standards, which caused her to develop body image issues. 

But that really just comes back to speech: The choice to use a filter is something that individual users do to express themselves. Providing those tools for users is a form of speech.

But aren’t many of these product design choices content-neutral? A defender of these verdicts might argue: Social media companies are manipulating minors into compulsively using their platforms, in a manner that’s bad for their mental health. And they’re doing this, in part, through push notifications, autoplaying videos, and endlessly scrolling feeds. So, why can’t we legally restrict their use of those features — without constraining the kinds of speech they’re allowed to platform?

Some people will say, “Why don’t we limit notifications — or kick people off after an hour — if they’re minors?” But in order to implement any set of rules or product design choices just for young people, these platforms would need to have a foolproof way of knowing who is a minor and who is an adult. 

And that means age verification procedures, where they’re either checking everyone’s government-issued ID, or they’re using biometric data — or something else that requires everyone to submit identification before they can speak anywhere on the internet.

And that creates a lot of problems. It makes people’s data more vulnerable to identity theft, hackers, and scammers. It also means that your identity is tied to everything you do online. And that can be dangerous, especially for people who are talking about sensitive issues or protesting the government. The ability to speak and organize online anonymously is very important. 

What if the product design restrictions applied to adults and minors alike? If we barred social media companies from issuing push notifications for everyone, that would avoid the age verification issue, right?

Many platforms give people the tools to do these things already. You can turn autoplay off. You can have a chronological feed. You can tailor your settings so that you don’t have these features.

If we’re saying, “Why can’t the government mandate these options?” I think that’s a very slippery slope. You might think, “Okay, who cares about push notifications? Why can’t the government just mandate that they not do push notifications?” But the rationale for that gets us into much broader territory. 

It’s effectively saying: Since some people will have a problem with this, the government must micromanage the way that the product is made. Yet people can use all sorts of products in a problematic way: fitness regimes, streaming services, food. And we don’t say that the government gets to step in and tell these companies exactly how to do business in whatever way would be least harmful to people. That attitude is particularly dangerous when we’re talking about products involving speech. 

A skeptic might argue that the slope here isn’t actually that slippery. After all, the government has already shown that it can enact targeted, content-neutral restrictions on speech without triggering a cascade of censorship.

For example, since 1990, there have been limits on the amount of advertising that can air during children’s programming in a given hour — and also a requirement that ads and content be clearly separated. Those measures are arguably more intrusive on speech than, say, banning autoplay of videos on a social media platform. And yet, the Children’s Television Act of 1990 didn’t lead to any really sweeping constraints on First Amendment rights. 

I just think it makes a big difference if you’re talking about restricting speech for minors and restricting it for adults. And what you were just mentioning were restrictions that would apply to everybody. 

Beyond the First Amendment issues, you’ve expressed some skepticism about the specific causal claims made by plaintiffs in these cases — specifically, that social media caused their mental health difficulties. Yet many social psychologists — most prominently Jonathan Haidt — have argued that these platforms are corrosive to children’s psychological well-being. So, why do you think the allegations here are overstated?

In the California case specifically, this young woman is alleging that, because she was on social media since she was very young, she developed mental health issues. But there was a lot of testimony showing that there were many other things going wrong in her life. She was exposed to domestic violence. She had troubles with her parents, troubles at school. 

So the idea that social media directly caused her difficulties — rather than these life stressors that are well-known to cause harm — I think that’s kind of suspect.

And I think you see this problem in the broader research on social media’s mental health impacts. There’s often a correlation between depressive symptoms and heavy social media use because people who are having a difficult time at home and at school — people who are socially isolated — tend to use social media more than people in better circumstances. 

How much do your views on the regulation of social media hinge on skepticism about the actual harms of these platforms? If we acquired evidence that there really were major impacts here — that autoplay and beauty filters were dramatically worsening kids’ mental health — would you support legal restrictions on these features? Or would First Amendment considerations override public health concerns, irrespective of the evidence?

The strength of the evidence is important for guiding the decision-making of individuals, parents, families, communities, and school districts. But even if we knew that beauty filters caused a lot of harm, the government still would not be justified in banning them, since they are avenues for speech. Plenty of people are not harmed by them. 

There are so many things that harm some people, but that are useful to others. And I don’t think the existence of problematic use justifies banning those things for everyone.

I think talk of social media “addiction” can be unhelpful on this front. That language suggests that this is something that’s automatically harmful for everyone. And that just isn’t the case. Plenty of people use social media in a healthy way, in the same way that countless people can drink alcohol without it harming them, or eat a bag of chips without bingeing on them. 

I think it’s the same way with social media. This is a technology that can harm some people, particularly those who already have psychological issues. 

But it isn’t some addictive substance or poison where any exposure at all is dangerous. I think that view imbues smartphones with an almost mystical quality.

There are many cases, though, where we choose to heavily regulate a substance or practice — not because it harms everyone who engages with it — but rather, because it imposes massive harms on a minority of problem users. Gambling and alcohol are two examples. But even with opioids, many people can pop some pills and never develop a dependency. Yet some end up addicted and dying of overdoses. And for that reason, we heavily restrict access to opioids. 

So, I feel like the question here might be less about whether social media is bad for everyone than whether it has truly large harms for problem users.

I think there are people who talk about it the way you do. But others describe social media as if it’s something that people are powerless against. But yes, I don’t think we have strong evidence that this is harmful in the way that addictive substances are. In fact, I think the evidence is really mixed. Some studies suggest that moderate smartphone use is actually correlated with better mental health outcomes.

You argue that, instead of seeking government restrictions on social media, parents should exercise more responsibility over their kids’ use of smartphones and apps. 

Many parents argue that their capacity to monitor their children’s social media use is really limited and that they lack the tools to protect their kids from the harmful effects of these platforms. What would you say to them?

I think this is straightforward with very young children. Like, why is a 6-year-old having unfettered alone time on a digital device? In the California case, the plaintiff was using social media as a very young child. And at that age, parents definitely have control over what their kids do and see online; you can control whether your kid has access to a smartphone.

With adolescents, there are areas where tech companies are working with parents. We’ve seen more parental controls being introduced in recent years. We’ve seen Meta roll out specific accounts for minors that have some restrictions on them. We’ve seen things like the introduction of phones that allow basic texting but not certain apps.

So, I think private solutions are possible here. I think we can address people’s legitimate concerns without having the government infringe on free expression.
