Mark Zuckerberg's content-moderation changes come after a long line of nightmares
- Content moderation has always been a nightmare for Meta.
- Its new content-moderation policy is a huge change — and it could be an improvement.
- Mark Zuckerberg's "apology tour" from the past few years seems to be officially over.
Mark Zuckerberg's changes to Meta's content-moderation policies are potentially huge.
To fully understand their gravity, it's useful to look at how Meta got here. And to consider what these changes might actually mean for users: Are they a bow to an incoming Trump administration? Or an improvement to a system that's gotten Zuckerberg and Co. lots of heat before? Or a little of both?
Content moderation has always been a pit of despair for Meta. In its blog post announcing the changes on Tuesday, Meta's new head of policy, Joel Kaplan, talked about wanting to get back to Facebook's roots in "free speech." Still, alongside those roots lies a long history of moderation fires, headaches, and constant adjustments to the platform's policies.
Starting in 2016, moderation troubles just kept coming like a bad "We Didn't Start the Fire" cover. Consider this roundup:
- Fake news.
- Macedonian teens.
- Russian troll farms.
- Hillary Clinton's team blaming Facebook for her loss.
- Zuckerberg responding to that idea by calling it "pretty crazy."
- Cambridge Analytica.
- Alleged manipulation by President Rodrigo Duterte in the Philippines.
- WhatsApp inciting lynchings in India.
- Facebook Groups spouting health misinformation involving kids drinking bleach or DIY skin-cancer treatments.
- The Rohingya Muslim genocide.
- COVID-19 misinformation.
- The Hunter Biden laptop story.
- "Stop the Steal" Facebook groups used to organize the January 6, 2021, attack on the US Capitol.
- The Facebook Files and whistleblower leaks about Instagram and teen mental health.
Whatever your political alignment, it seems like Meta has been trapped in a vicious cycle of making a policy — or lacking a policy — then reversing itself to try to clean up a mess.
As Charlie Warzel pointed out in The Atlantic, Zuckerberg has sometimes blamed external forces when he's faced with situations like some of the ones above.
That may have ended now. As Zuckerberg posted on Threads on Wednesday, "Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better."
Maybe the big changes were already brewing this past September when Zuckerberg appeared at a live event and said, "One of the things that I look back on and regret is I think we accepted other people's view of some of the things that they were asserting that we were doing wrong, or were responsible for, that I don't actually think we were."
In other words, as of this week, the apology tour seems to have ended.
What will Meta's changes mean for you and me, the users?
Who knows! But I can make a few predictions:
The "community notes" system might work pretty well, or at least no worse than the current human- and AI-led fact-checking system.
There might be more content in your feeds that you don't like — political speech that you find abhorrent, for example.
It's also possible that while certain content might exist on the platform, you won't actually come across it because it will have been downgraded. "Freedom of speech, not freedom of reach" has been X's mantra (though considering the flow of truly vile content that has proliferated in my feed there in the past year or so, I don't think that's been particularly effective).
One other piece of the announcement is that Meta will focus its AI-powered filtering efforts on the highest-risk content (terrorism, drugs, and child endangerment). For lesser violations, the company said, it will rely more on user reports. Meta hasn't given details on how exactly this will work, but I imagine it could have a negative effect on common issues like bullying and harassment.
A large but less glamorous part of content moderation is removing "ur ugly" comments on Instagram — and that's the kind of stuff that will rely on user reporting.
It's also quite possible that bad actors will take advantage of the opening. Facebook is nothing if not a place to buy used furniture while various new waves of pillagers attempt to test and game the algorithms for profit or menace — just consider the current wave of AI slop, some of which appears at least in part to be a profitable scam operation run from outside the US.
What do the changes mean for Meta?
If these changes had been rolled out slowly, one at a time, they might have seemed like reasonable measures just on their face. Community notes? Sure. Loosening rules on certain hot political topics? Well, not everyone will like it, but Meta can claim some logic there. Decreasing reliance on automatic filters and admitting that too many non-violations have been swept up in AI dragnets? People would celebrate that.
No one thought Meta's moderation before the announced changes was perfect. There were lots of complaints, many of them correct, about how it banned too much stuff by mistake, which this new policy aims to fix.
And switching from third-party fact-checkers to a community-notes system isn't necessarily bad. The fact-checking system wasn't perfect, and community notes on X, the system Meta is modeling its own after, can be quite useful. Even acknowledging that, yes, X has sometimes become a cesspit for bad content, the root cause isn't the community notes.
Still, it's impossible to weigh the merits of each aspect of the new policy while wearing blinders to the 800-pound political gorilla in the room.
There's one pretty obvious way of looking at Meta's announcement of sweeping changes to its moderation policy: It's a move to cater to an incoming Trump administration. It's a sign that Zuckerberg has shifted to the right, as he drapes himself in some of the cultural signifiers of the bro-y Zynternet (gold chain, $900,000 watch, longer hair, new style, front row at an MMA match).
Taken together, all of this loudly signals that Zuckerberg either (a) genuinely believes he was forced to cave on moderation issues in the past or (b) knows that making these changes will please Trump. I don't think the distinction between the two matters much anyway. (Meta declined to comment.)
This probably isn't the last of the changes
I try to avoid conflating "Meta" with "Mark Zuckerberg" too much. It's a big company, full of smart people who care deeply about the lofty goals of social networking, who create policy and carry out the daily work of trust and safety.
Part of me wonders how much Zuckerberg wishes this boring and ugly part of the job would fade away — there are so many more shiny new things to work on, like AI or mixed-reality smart glasses. Reworking the same decade-old policies so that people can insult each other 10% more is probably less fun than MMA fighting or talking to AI researchers.
Content moderation has always been a nightmare for Meta. Scaling it back, allowing more speech on controversial topics, and outsourcing fact-checking to the community seems like a short-term fix for having to deal with this unpleasant and thankless job. I can't help but imagine that another overhaul will come due sometime in the next four years.