The New Shadowbanning Panic
Over the past several days, TikTok users have found themselves at a loss. Literally, I mean: They lost their audiences, and their view counts showed “0.” Some people who attempted to upload content about anti-ICE protests or the killing of Alex Pretti alleged that the platform was intentionally blocking them from doing so. Others were able to get their videos uploaded, but alleged that TikTok was not distributing them. Still others noticed that they were unable to send the word Epstein in a direct message, a quirk so bizarre that it prompted California Governor Gavin Newsom to repost a screenshot shared by an anonymous X account using the handle @intelligentpawg.
For many of these people, the explanation was obvious: “MAGA censorship.” Newsom said in his post that he would be launching a review “into whether TikTok is violating state law by censoring Trump-critical content,” and the concern isn’t totally random. TikTok’s U.S. business changed hands just last week, spinning off from the Chinese company ByteDance, as required by a 2024 law, into a new organization called the TikTok USDS Joint Venture LLC. Major investors in the new entity include Oracle, which was co-founded by Larry Ellison, a Trump ally whose son runs one of the country’s largest media conglomerates and is also cozy, or attempting to be cozy, with the president. Is it so far-fetched to imagine that the new owners would tweak the platform in his favor?
Jamie Favazza, a spokesperson for TikTok USDS Joint Venture, wrote to me in an email that the app is being run as it was before and that American users will “continue to have the same experience they already know and love.” The company has blamed the issues on a power outage at one of its data centers, which it said caused a “cascading” systems failure affecting all types of content (not just posts about Minnesota). The data center in question is operated by Oracle, which has managed TikTok’s U.S. user data since 2022. The company was still working to fix some of the bugs as of today. Separately, the company said that there is no rule against saying Epstein and that this glitch was caused by a technical issue with its safety systems. (When I sent the word Epstein to my fiancé at 8 yesterday morning, it went through fine.)
Unsurprisingly, a lot of people are not totally buying the explanation. Trump has joked that he would make the app’s algorithm “100 percent MAGA” if he could, and it’s true that aspects of the broader media ecosystem have shifted dramatically in Trump’s favor: starting with Elon Musk’s takeover of X, then major TV networks’ capitulation to Trump, and now TikTok’s transfer to Trump-friendly investors. And Trump has never been shy about applying pressure to companies in order to satisfy his own whims.
Then factor in the stakes of the moment. As federal agents threaten the basic principles of democracy in Minnesota, Americans are looking to their phones for up-to-date and on-the-ground information about a complicated, ongoing event. For better or worse, millions of Americans use TikTok for news: Any whisper of intervention or suppression is natural cause for concern. (With everything else going on, people haven’t forgotten about the Trump administration’s bizarre handling of the Jeffrey Epstein files.)
That’s a perfect storm for paranoia about TikTok’s actions, and the platform isn’t helped by its recent history. In 2020, the company apologized in response to an outcry about inexplicably low view counts on videos about Black Lives Matter. It cited a technical glitch in that case, too, and some people pointed out that other popular, politically neutral hashtags (such as #cat) were also affected, but suspicion lingered. More recently, some American TikTok users have felt directly censored by their own government: In the run-up to the legislation that forced the app’s sale, several lawmakers argued that TikTok was warping the minds of young people and making them overly critical of Israel, and cited that as a reason to regulate the app.
Feed-based, view-based social-media platforms are central to American political discourse, which is why politicians so often fight over the details of their operation. In recent memory, it was more often Republicans calling for investigations of platform censorship, shadowbanning, and collusion between the White House and social-media companies. Back then, researchers who studied the problem generally found no blanket, pervasive bias against right-leaning viewpoints per se. Right-wing users were, however, more likely to spread certain types of low-quality content, namely misinformation, which made them more likely to be penalized and therefore affirmed their feeling of being silenced.
Now we’re in a paradoxical stage of content moderation, in which some spaces are more chaotic than ever and others are more restricted in highly specific ways. On the one hand, you have Musk’s X, which has removed most guardrails from public discourse, to the point of enabling users to generate nude images of their political foes; on the other, you have Meta’s Instagram, which has been contorting its rules in response to pressure from parent groups and politicians to make the app safer for teens. The interface now accuses users of looking for child-sex-abuse material if they search the phrase hot girls.
Into the breach steps a celebrity who says she is being prevented from posting about “????” or a journalist who claims that “the new TikTok algorithm has ZERO, and I mean absolutely ZERO news or politics content, not one word about anything going on at all, not even the weather.” In the replies to the latter post, others pushed back a bit, saying that they had actually seen plenty about the Minnesota protests and that the app was just performing weirdly in general. But the debate is unresolvable, because users have no objective way to assuage their own doubt or confirm their own fears. (I’ve also seen people start to question the visibility of political content they’ve posted on Instagram, despite the obvious fact that a ton of similar content has been highly visible there.)
Three years ago, after Musk acquired Twitter and before he turned it into X, he went through a phase of personally investigating users’ claims that they had been shadowbanned by the prior ownership. At the time, I wrote that this wouldn’t eliminate anxiety about the platform’s secret machinations. For that story, I spoke with Laura Savolainen of the University of Helsinki, then a doctoral student and now a postdoctoral researcher, about how hard it is to pry people away from the folk theories they develop about how the algorithm is treating them and why their content isn’t being seen as widely as they think it should be. “Algorithms are very conducive to folklore because the systems are so opaque,” she said then.
The reason to be paranoid about platform censorship is always the same: Whether or not it’s happening, it could happen. And when people feel especially reliant on social-media platforms not for stimuli, shopping, or slop, but for vital information and for feelings of cohesion, support, and action, that possibility never feels more real.