Fifth Circuit: You Have To Do A Ton Of Busywork To Show Texas’s Social Media Law Violates The First Amendment
If the government passes a law that infringes on the public’s free speech rights, how should one challenge the law?
As recent events have shown, the answer is more complex than many realized.
A few years ago, both Texas and Florida passed “social media content moderation” laws, which would limit how social media platforms could moderate content while simultaneously demanding that they explain their editorial decision-making. The laws were then challenged as unconstitutional under the First Amendment.
While three of the four lower courts that heard the challenges (two district courts and one of the two appeals courts) found it patently obvious that the laws were unconstitutional incursions on free speech, the Supreme Court took a different approach: it effectively punted on the issue, while giving some clues about how the First Amendment should apply.
Specifically, the Supreme Court sent the challenges to both laws back to the lower courts, saying that since both challenges — brought by the trade groups NetChoice and CCIA — were presented as “facial challenges,” they required a different analysis than any of the lower courts had engaged in.
A “facial challenge” is one where the plaintiffs are saying, “yo, this entire law is clearly unconstitutional.” The alternative is an “as applied” challenge, in which you effectively have to wait until one of the states tries to use the law against a social media platform. Then you can respond and say “see? this violates my rights and therefore is unconstitutional!”
The Supreme Court said that if something is a facial challenge, then the courts must first do a convoluted analysis of every possible way the law could be applied, to see whether some applications of the law might be constitutional.
That said, the Supreme Court’s majority opinion still took the Fifth Circuit to task, highlighting how totally blinkered and disconnected its First Amendment analysis was from the amendment’s clear meaning and historical precedents. Over and over again, the Supreme Court dinged Texas’ law as pretty obviously unconstitutional. Here’s just one snippet of many:
They cannot prohibit private actors from expressing certain views. When Texas uses that language, it is to say what private actors cannot do: They cannot decide for themselves what views to convey. The innocent-sounding phrase does not redeem the prohibited goal. The reason Texas is regulating the content moderation policies that the major platforms use for their feeds is to change the speech that will be displayed there. Texas does not like the way those platforms are selecting and moderating content, and wants them to create a different expressive product, communicating different values and priorities. But under the First Amendment, that is a preference Texas may not impose.
Indeed, the Supreme Court noted that it could already see that the Fifth Circuit was on the wrong track, even as it sent the case back over the procedural requirements of a facial challenge:
But there has been enough litigation already to know that the Fifth Circuit, if it stayed the course, would get wrong at least one significant input into the facial analysis. The parties treated Facebook’s News Feed and YouTube’s homepage as the heartland applications of the Texas law. At least on the current record, the editorial judgments influencing the content of those feeds are, contrary to the Fifth Circuit’s view, protected expressive activity. And Texas may not interfere with those judgments simply because it would prefer a different mix of messages. How that matters for the requisite facial analysis is for the Fifth Circuit to decide. But it should conduct that analysis in keeping with two First Amendment precepts. First, presenting a curated and “edited compilation of [third party] speech” is itself protected speech. Hurley, 515 U. S., at 570. And second, a State “cannot advance some points of view by burdening the expression of others.” PG&E, 475 U. S., at 20. To give government that power is to enable it to control the expression of ideas, promoting those it favors and suppressing those it does not. And that is what the First Amendment protects all of us from.
But, either way, the case went back to the Fifth Circuit, which is now sending it down to the district court with instructions that the trade groups are going to have to argue every single point as to why the law should be considered unconstitutional.
As the Supreme Court recognized, it is impossible to apply that standard here because “the record is underdeveloped.” Id. at 2399. Who is covered by Texas House Bill 20 (“H.B. 20”)? For these actors, which activities are covered by H.B. 20? For these covered activities, how do the covered actors moderate content? And how much does requiring each covered actor to explain its content-moderation decisions burden its expression? Because these are fact-intensive questions that must be answered by the district court in the first instance after thorough discovery, we remand.
So, basically, get ready for a ridiculously long and involved process for challenging the law. The Fifth Circuit also takes a swipe at how the case was litigated in the district court:
A proper First Amendment facial challenge proceeds in two steps. The “first step” is to determine every hypothetical application of the challenged law. Id. at 2398 (majority opinion). The second step is “to decide which of the law[’s] applications violate the First Amendment, and to measure them against the rest.” Ibid. If the “law’s unconstitutional applications substantially outweigh its constitutional ones,” then and only then is the law facially unconstitutional. Id. at 2397. “[T]he record” in this case “is underdeveloped” on both fronts. See id. at 2399; see also id. at 2410–11 (Barrett, J., concurring) (noting the record failed to “thoroughly expose[] the relevant facts about particular social-media platforms and functions”); id. at 2411 (Jackson, J., concurring in part and concurring in the judgment) (noting plaintiffs failed to show “how the regulated activities actually function”); id. at 2412 (Thomas, J., concurring in the judgment) (noting plaintiffs “failed to provide many of the basic facts necessary to evaluate their challenges to H.B. 20”); id. at 2422 (Alito, J., concurring in the judgment) (noting the “incompleteness of this record”). That is a consequence of how this case was litigated in district court.
There is plenty of busywork for all involved:
There is serious need of factual development at the second step of the analysis as well. To determine if any given application of H.B. 20’s “content-moderation provisions” is unconstitutional, the district court must determine “whether there is an intrusion on protected editorial discretion.” Id. at 2398 (citation omitted). That requires a detailed understanding of how each covered actor moderates content on each covered platform. See id. at 2437 (Alito, J., concurring in the judgment) (“Without more information about how regulated platforms moderate content, it is not possible to determine whether these laws lack a plainly legitimate sweep.” (quotation omitted)). Focusing primarily on Facebook’s News Feed or YouTube’s homepage will not suffice, as “[c]urating a feed and transmitting direct messages,” for example, likely “involve different levels of editorial choice, so that the one creates an expressive product and the other does not.” Id. at 2398 (majority opinion).
Moreover, one of the principal factual deficiencies in the current record, according to the Supreme Court, concerns the algorithms used by plaintiffs’ members. See, e.g., id. at 2404 n.5; id. at 2410–11 (Barrett, J., concurring); id. at 2424, 2427, 2436–38 (Alito, J., concurring in the judgment). It matters, for example, if an algorithm “respond[s] solely to how users act online,” or if the algorithm incorporates “a wealth of user-agnostic judgments” about the kinds of speech it wants to promote. Id. at 2404 n.5 (majority opinion); see also id. at 2410 (Barrett, J., concurring). And this is only one example of how the “precise technical nature of the computer files at issue” in each covered platform’s algorithm might change the constitutional analysis. ROA.539 (quotation omitted). It also bears emphasizing that the same covered actor might use a different algorithm (or use the same algorithm differently) on different covered services. For example, it might be true that X is a covered actor and that both its “For You” feed and its “Following” feed are covered services. But it might also be true that X moderates content differently or that its algorithms otherwise operate differently across those two feeds. That is why the district court must carefully consider how each covered actor moderates content on each covered service.
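To make the Court’s distinction a bit more concrete, here’s a purely illustrative sketch, with every function name, field, and weight invented for the example rather than drawn from any actual platform’s code, of the difference between an algorithm that “respond[s] solely to how users act online” and one that layers in “user-agnostic judgments” about which kinds of speech to promote:

```python
# Purely hypothetical sketch, not any platform's actual ranking code.
# It contrasts the two kinds of algorithms the Court distinguishes.

def rank_engagement_only(posts):
    """Ranks posts solely on user behavior: no editorial judgment involved."""
    return sorted(
        posts,
        key=lambda p: p["clicks"] + 2 * p["shares"],
        reverse=True,
    )

# "User-agnostic" value judgments the platform itself chooses: which
# categories of speech to boost or demote, regardless of engagement.
EDITORIAL_WEIGHTS = {
    "original_reporting": 1.5,  # promoted as a matter of platform values
    "borderline_content": 0.2,  # demoted as a matter of platform values
}

def rank_with_editorial_judgment(posts):
    """Applies platform-chosen weights on top of the raw engagement signal."""
    def score(post):
        engagement = post["clicks"] + 2 * post["shares"]
        return engagement * EDITORIAL_WEIGHTS.get(post["category"], 1.0)
    return sorted(posts, key=score, reverse=True)
```

Under the Supreme Court’s framing, the second function looks a lot more like protected editorial expression than the first, which is exactly why the district court is now supposed to figure out which bucket each feed on each covered platform falls into.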
Separately, there’s the question of the law’s transparency and explanation requirements. Incredibly, the ruling says that the lower court has to explore whether being required to explain your editorial decisions is a First Amendment-violating burden:
When performing the second step of the analysis, the district court must separately consider H.B. 20’s individualized-explanation provisions. As the Supreme Court has instructed, that requires “asking, again as to each thing covered, whether the required disclosures unduly burden expression.” Moody, 144 S. Ct. at 2398 (majority opinion). The first issue to address here is the same one addressed above: whether each covered actor on each covered platform is even engaging in expressive activity at all when it makes content-moderation decisions. See id. at 2399 n.3 (explaining that these provisions “violate the First Amendment” only “if they unduly burden expressive activity” (emphasis added)). Then for each covered platform engaging in expressive activity, _the district court must assess how much the requirement to explain that platform’s content-moderation decisions burdens the actor’s expression_.
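For a sense of what that requirement implies in practice, here’s a hypothetical sketch of the record a platform might have to generate for every single moderation action. The field names are my own invention for illustration; H.B. 20 doesn’t prescribe any particular schema:

```python
# Hypothetical sketch of the bookkeeping an "individualized explanation"
# requirement implies. Field names are invented for illustration; H.B. 20
# does not specify a data format.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ModerationExplanation:
    action_id: str        # unique identifier for this moderation action
    content_id: str       # the post, reply, or account affected
    action_taken: str     # e.g. "removed", "demoted", "labeled"
    rule_cited: str       # the specific policy the platform says applies
    rationale: str        # human-readable explanation sent to the user
    decided_at: datetime  # when the decision was made
    appealable: bool      # the law also contemplates an appeals process
```

Multiply a record like that across the millions of moderation actions a large platform takes every day, and you start to see why the “burden” question isn’t exactly a mystery.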
The one interesting tidbit here is the role that ExTwitter plays in all of this. Already, the company has shown that while it is complying with the EU DSA’s requirements to report all moderation activity, it’s doing so grudgingly. Given the nature of the Fifth Circuit (and this panel of judges in particular), it would certainly be interesting to have Elon actually highlight how burdensome the law is on his platform.
Remember, the law at issue, HB 20, was passed under the (false) belief that “big social media companies” were unfairly moderating to silence conservatives. The entire point of the law was to force such companies to host conservative speech (including extremist, pro-Nazi speech). The “explanations” portion of the law was basically to force the companies to reveal any time they took actions against such speech so that people could complain.
But now that ExTwitter is controlled by a friend — though one who is frequently complaining about excessive government regulation — it would be quite interesting if he gets dragged into this lawsuit and participates by explaining just how problematic the law is, in a way that even Judge Andrew Oldham (who seems happy to rule whichever way makes Donald Trump happiest) might realize that the law is bad.
Either way, for now, as the case goes back to the district court, NetChoice and CCIA will have an awful lot of work to do, for two groups that are already incredibly overburdened in trying to protect the open internet.