The Good, The Bad, And The Stupid In Meta’s New Content Moderation Policies
When the NY Times declared in September that “Mark Zuckerberg is Done With Politics,” it was obvious the framing was utter nonsense: Zuckerberg was quite clearly in the process of sucking up to Republicans after Republican leaders spent the past decade using him as a punching bag on which they could blame all sorts of things (mostly unfairly).
Now, with Trump heading back to the White House and Republicans controlling Congress, Zuck’s desperate attempts to appease the GOP have reached new heights of absurdity. Trump’s threat to have Zuckerberg jailed over the made-up myth that Zuckerberg helped get Biden elected only seemed to cement that the GOP’s non-stop scapegoating of Zuck had gotten to him.
Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last few days, he also had GOP-whisperer Joel Kaplan replace Nick Clegg as the company’s head of global policy. On Monday it was announced that Zuckerberg had also appointed Dana White to Meta’s board. White is the CEO of UFC, but also (perhaps more importantly) a close friend of Trump’s.
All this seems pretty damn political.
Then, on Tuesday morning, Zuckerberg posted a video about the changing content moderation practices across Meta’s various properties.
Some of the negative reactions to the video are a bit crazy, as I doubt the changes will have that big an impact. Some of the changes may even be sensible. But let’s break them down into three categories: the good, the bad, and the stupid.
The Good
Zuckerberg is exactly right that Meta has been really bad at content moderation, despite having the largest content moderation team out there. In just the last few months, we’ve talked about multiple stories showcasing really, really terrible content moderation systems at work on various Meta properties. There was the story of Threads banning anyone who mentioned Hitler, even to criticize him. Or banning anyone for using the word “cracker” as a potential slur.
It was all a great demonstration of Masnick’s Impossibility Theorem: content moderation at scale is impossible to do well, and mistakes are inevitable. I know that people within Meta are aware of my impossibility theorem, and have talked about it a fair bit. So, some of this appears to be them recognizing that it’s a good time to recalibrate how they handle such things:
In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.
Leaving aside (for now) the use of the word “censored,” much of this isn’t wrong. For years, it felt like Meta was easily pushed around on these issues and did a shit job of explaining why it did things, instead reacting to the controversy of the day.
And so it’s no surprise that, as its setup grew more and more complex, its systems kept banning people for very stupid reasons.
Seeking to fix that actually is a good idea, and if part of the plan is to be more cautious in issuing bans, that seems somewhat reasonable. As Zuckerberg announced in the video:
We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So, by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a trade-off. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.
Zuckerberg’s announcement is a tacit admission that Meta’s much-hyped AI is simply not up to the task of nuanced content moderation at scale. But somehow that angle is getting lost amidst the political posturing.
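To make that trade-off concrete, here’s a minimal, purely illustrative sketch of what “requiring much higher confidence” before a takedown means. The posts and numbers are invented, and nothing here reflects Meta’s actual systems, whose internals aren’t public:

```python
# Hypothetical sketch of a confidence threshold on an automated
# moderation classifier. Purely illustrative; not Meta's actual system.

posts = [
    # (classifier's confidence the post violates policy, actually violates?)
    (0.95, True),   # clear violation, caught at either threshold
    (0.70, True),   # borderline violation, missed at a higher threshold
    (0.65, False),  # harmless post the classifier is suspicious of
    (0.30, False),  # clearly harmless
]

def takedowns(threshold):
    """Return (violations caught, innocent posts wrongly removed)."""
    caught = sum(1 for conf, bad in posts if conf >= threshold and bad)
    wrongly_removed = sum(1 for conf, bad in posts if conf >= threshold and not bad)
    return caught, wrongly_removed

# Low threshold: catches more bad content, but sweeps up innocent posts.
print(takedowns(0.60))  # (2, 1)

# High threshold: no wrongful removals, but bad content slips through.
print(takedowns(0.90))  # (1, 0)
```

Raise the threshold and wrongful takedowns drop, but so does the catch rate. That is the entire trade-off Zuckerberg is describing.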
Some of the other policy changes also don’t seem all that bad. We’ve been mocking Meta’s “we’re downplaying political content” stance from the last few years as inherently stupid, so it’s nice in some ways to see the company backing off of that (though we’ll discuss the timing and framing of this decision in the later sections of this post):
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
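For what it’s worth, the mechanism described there is a fairly standard ranking approach: combine explicit and implicit engagement signals into a single score per post. Here’s a toy sketch with entirely invented signal names and weights, since Meta doesn’t publish its actual ranking models:

```python
# Toy feed-ranking sketch using explicit and implicit engagement signals.
# Signal names and weights are made up for illustration only.

WEIGHTS = {
    "liked_author_before": 3.0,   # explicit signal: past likes
    "followed_author": 2.0,       # explicit signal: follows
    "viewed_similar_posts": 1.5,  # implicit signal: viewing behavior
}

def score(signals):
    """Combine a post's signals into a single ranking score."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

feed = [
    {"id": "civic-post", "signals": {"liked_author_before": 1,
                                     "followed_author": 1,
                                     "viewed_similar_posts": 0}},
    {"id": "cat-video", "signals": {"liked_author_before": 0,
                                    "followed_author": 0,
                                    "viewed_similar_posts": 1}},
]

# Civic content now competes on the same personalized signals as
# everything else, instead of being down-ranked as a category.
feed.sort(key=lambda p: score(p["signals"]), reverse=True)
print([p["id"] for p in feed])  # ['civic-post', 'cat-video']
```

The substantive change is simply that civic content is scored on those personalized signals like everything else, rather than being suppressed wholesale.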
Finally, most of the attention people have given to the announcement has focused on the plan to end the fact-checking program, with a lot of people freaking out about it. I even had someone tell me on Bluesky that Meta ending its fact-checking program was an “existential threat” to truth. And that’s nonsense. The reality is that fact-checking has always been a weak and ineffective band-aid to larger issues. We called this out in the wake of the 2016 election.
This isn’t to say that fact-checking is useless. It’s helpful in a limited set of circumstances, but too many people (often in the media) put way too much weight on it. Reality is often messy, and the very setup of “fact checking” seems to presume there are “yes/no” answers to questions that require a lot more nuance and detail. Just as an example of this, during the run-up to the election, multiple fact checkers dinged Democrats for calling Project 2025 “Trump’s plan,” because Trump denied it and said he had nothing to do with it.
But, of course, since the election, Trump has hired a bunch of the Project 2025 team, and they seem poised to enact much of the plan. Many things are complex. Many misleading statements start with a grain of truth and then build a tower of bullshit around it. Reality is rarely a matter of “this is true” or “this is false”; it’s a matter of understanding the degree to which a claim is accurate but doesn’t cover all of the issues, or fails to deal with the overall reality.
So, Zuck’s plan to kill the fact-checking effort isn’t really all that bad. I think too many people were too focused on it in the first place, despite how little impact it seemed to actually have. The people who wanted to believe false things weren’t being convinced by a fact check (and, indeed, started to falsely claim that fact checkers themselves were “biased”).
Indeed, I’ve heard from folks at Meta that Zuck has wanted to kill the fact-checking program for a while. This just seemed like the opportune time to rip off the band-aid, while also gaining a little political capital with the incoming GOP team.
On top of that, adding a feature like Community Notes (née Birdwatch from Twitter) is also not a bad idea. It’s a useful feature for what it does, but it was never meant to be (nor could it ever be) a full replacement for other kinds of trust & safety efforts.
The Bad
So, if a lot of the functional policy changes here are actually more reasonable, what’s so bad about this? Well, first off, the framing of it all. Zuckerberg is trying to run the Elon Musk playbook of pretending this is all about free speech. Contrary to Zuckerberg’s claims, Facebook has never really been about free speech, and nothing announced on Tuesday does much to aid free speech.
I guess some people forget this, but in its earlier days, Facebook was way more aggressive than sites like Twitter about what it would not allow. It very famously had a no-nudity policy, which created a huge protest when breastfeeding images were removed. The idea that Facebook was ever designed to be a “free speech” platform is nonsense.
Indeed, if anything, ending the program is Meta censoring its own speech. After all, the entire fact-checking program was an expression of Meta’s own position on things. It was “more speech.” All fact-checking does is add context and additional information; it does not remove content. By no stretch of the imagination is fact-checking “censorship.”
Of course, bad faith actors, particularly on the right, have long tried to paint fact-checking as “censorship.” But this talking point, which we’ve debunked before, is utter nonsense. Fact-checking is the epitome of “more speech,” exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is.
Also bad is Zuckerberg’s misleading use of the word “censorship” to describe content moderation policies. We’ve gone over this many, many times, but using “censorship” to describe private property owners enforcing their own rules completely devalues the actual problem of censorship, which is the government suppressing speech. Every private property owner has rules for how you can and cannot interact in their space. We don’t call it “censorship” when you get tossed out of a bar for breaking its rules, nor should we call it censorship when a private company blocks or bans your content for violating its rules (even if you think the rules are bad or were improperly enforced).
The Stupid
The timing of all of this is obviously political. It is very clearly Zuckerberg caving to more threats from Republicans, something he’s been doing a lot of in the last few months, while insisting he was done caving to political pressure.
I mean, even Donald Trump is saying that Zuckerberg is doing this because of the threats that Trump and friends have leveled in his direction.
I raise this mainly to point out the ongoing hypocrisy of all of this. For years we’ve been told that the Biden campaign (pre-inauguration in 2020 and 2021) engaged in unconstitutional coercion to force social media platforms to remove content. And here we have the exact same thing, except that it’s much more egregious and Trump is even taking credit for it… and you won’t hear a damn peep from anyone who has spent the last four years screaming about the “censorship industrial complex” pushing social media to make changes to moderation practices in their favor.
Turns out none of those people really meant it. I know, not a surprise to regular readers here, but it should be called out.
Also incredibly stupid is this line, quoted straight from Zuck’s Threads thread about all this:
Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
There’s a pretty big assumption in there which is both false and stupid: that people who live in California are inherently biased, while people who live in Texas are not. People who live in both places may, in fact, be biased, though often not in the ways people believe. As a few people have pointed out, more people in Texas voted for Kamala Harris (4.84 million) than did so in New York (4.62 million). Similarly, almost as many people voted for Donald Trump in California (6.08 million) as did so in Texas (6.39 million).
There are people with all different political views all over the country. The idea that everyone in one area believes one thing politically, or that you’ll get “less bias” in Texas than in California, is beyond stupid. All it really does is reinforce misguided stereotypes.
The whole statement is clearly for political show.
It also sucks for Meta’s trust & safety employees who want access to certain forms of healthcare, or who want net neutrality or other policies that are super popular among voters across the political spectrum, but that Texas has decided are not allowed.
Finally, there’s this stupid line in the announcement from Joel Kaplan:
We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
I’m sure that sounded good to whoever wrote it, but it makes no sense at all. First off, thanks to the Speech and Debate Clause, literally anything is legal to say on the floor of Congress. It’s the one spot in the world where there are essentially no rules at all over what can be said. Why include that? Things can be said on the floor of Congress that would be illegal anywhere else, including on Meta’s platforms.
Also, TV stations operate under restrictions known as “standards and practices” that are way, way, way more restrictive than any set of social media content moderation rules. Neither of these is a relevant benchmark for social media. What jackass thought that pairing (1) the least restricted place for speech with (2) a far more restrictive medium made this a reasonable argument?
In the end, the reality is that nothing announced this week will change all that much for most users. Most users don’t run into content moderation all that often, and fact-checking happens but isn’t all that prominent. But all of this is a big signal that Zuckerberg, for all his talk of being “done with politics” and no longer giving in to political pressure on moderation, is very much engaged in politics and a complete, spineless pushover for modern Trumpist politicians.