The Grievance Economy
Last month, Jack Dorsey, as CEO of Block, announced layoffs of nearly 40% of the workforce, citing AI as the reason. IBM, UPS, and Klarna had already done the same. Last week, President Trump ordered federal agencies to cease business with Anthropic after a dispute with the Pentagon, in which the AI company insisted on restrictions preventing its products' use in mass surveillance of Americans and in autonomous weapon systems. Corporate leaders made unilateral decisions about AI deployment. A president used an executive directive to punish an AI company for insisting on restrictions. Neither decision was governed by law, regulation, or democratic process. Together they exposed a deeper problem: the institutions that might constrain such decisions are absent or unwilling to act.
When power over AI deployment is exercised without institutional guardrails, and millions face economic displacement from decisions they cannot appeal, the breeding ground for radicalization forms. Not violent extremism necessarily, but despondency and hopelessness transforming into the conviction that the system is rigged. That conviction becomes the justification for justice taken into one’s own hands. It manifests as withdrawal from civic participation, deepening distrust of institutions, and a generation disengaging from the very mechanisms that might address their grievance.
When someone loses a job, they lose more than income. They lose status, routine, and belonging all at once. Brain research shows that social loss travels through the same neural pathways as physical pain. The mind responds by craving connection more intensely and becoming more sensitive to signals of acceptance or rejection. Social media platforms bank on exactly that vulnerability. They deliver unpredictable social rewards: a like, a comment, recognition that arrives sometimes, but not always. That unpredictability is the most powerful kind of reinforcement there is. For someone who just lost their job and identity, these systems deliver exactly what the mind craves.
Mechanisms for Radicalization
Researchers from Stanford demonstrated the amplification mechanism. They could shift how much people distrusted those with different political views simply by changing what appeared in their feeds. Reducing hostile content lowered hostility; increasing it raised hostility. The finding held regardless of political identity.
It can be debated whether algorithms cause radicalization or whether radicalized individuals gravitate toward validating communities. Research suggests both may operate (see Radicalization on Reddit and Contemporary Dynamics on Social Media Polarization). For risk management purposes, the distinction does not matter. Either way, the outcome is the same: grievance becomes collective, validated, actionable.
Scholarship on radicalization identifies a consistent pattern. When four conditions converge, radicalization risk rises sharply. Extremist movements draw from populations experiencing structural inequities, isolation, or perceived marginalization, which provide fertile ground for grievance formation. Radicalizing ideologies rely on explanatory stories that frame opponents as corrupt or dangerous, elevate in-group identity, and promise meaning, moral clarity, or community. Radicalization accelerates when individuals come to perceive legal and accountability institutions as illegitimate, biased, or unjust. And modern digital ecosystems intensify radicalization by accelerating emotionally charged content, filtering out moderating perspectives, and reinforcing identity-threatening narratives. The United States is currently generating the first three at scale. AI is now industrializing the fourth.
A Preview
China offers a preview. The lying flat movement emerged in 2021 when young people concluded the system no longer worked. Youth unemployment hit a record 21.3% in 2023. By 2025, the movement had hardened into “rat people” (鼠人): young people spending 23 hours a day in bed, scrolling, refusing to participate. The government censored that image instead of addressing what produced it. AI-driven displacement is now deepening those conditions. Graduate job listings in China dropped 22% in the first half of 2025.
The United States is not China. Beijing can suppress its disengaged generation. We cannot. Our trajectory mirrors theirs. The World Economic Forum projects that 41% of employers globally intend to reduce workforces by 2030, driven primarily by AI. Youth unemployment among Americans aged 16 to 24 reached 10.8% in summer 2025, up from 9.8% the prior year. Among Black youth, 14.3%. These figures count only those still looking. Behind them is a growing population of young people who have stopped: stopped applying, stopped enrolling, and stopped engaging with institutions that no longer respond.
What distinguishes today from 1982 is not the grievance itself. A steelworker in Pittsburgh in 1982 experienced the same exclusion, institutional abandonment, and hunger for belonging. What is new is the algorithm. Today it finds him before his family knows he lost the job. It exposes his vulnerability, amplifies his despair, and connects him to narratives that name who is responsible and to communities that share his grievance. What once required slow organizing and face-to-face recruitment now happens at algorithmic speed. The community finds him before he even realizes he is part of a movement.
Extremist movements have always known where to find recruits. They draw from populations experiencing economic exclusion and the specific anger of feeling locked out by design. They offer what institutions stopped providing: a story identifying enemies who are corrupt and dangerous, a group that shares the grievance, a sense of purpose built around that wound. That narrative is the bridge between individual loss and collective action. They grow wherever institutions have failed to respond. The IMF estimates that 60% of jobs in advanced economies are already exposed to AI disruption, with roughly half facing reduced demand or elimination. That is the population primed to enter these exact conditions.
Malign state actors have already recognized this. Russia and China, according to U.S. intelligence assessments and academic investigators, are using AI to generate content aimed at exactly these populations. The cost of producing that content has dropped to near zero. The cost of rebuilding the trust it erodes has not.
A 2024 analysis in the Harvard Law School Forum on Corporate Governance found no law requiring boards to consider downstream consequences of AI-driven workforce decisions. Severance packages address only the economic dimension. When institutions demonstrably fail to address the conditions producing grievance, they signal to displaced workers that institutions either cannot or will not help. That signal erodes institutional legitimacy, while accelerating the very conditions that enable radicalization.
The question this moment demands is not whether AI belongs in the American economy. It does, and the debate framed that way is already over. The competitive pressure is real, and in fact AI may intensify the cognitive load and the performance and productivity expectations of work, placing additional pressure on both workers and decision makers. The productive question, the one that still has a useful answer, is how much wisdom gets applied to the pace and terms of its arrival, and whether that wisdom extends to the generation inheriting the consequences.
In January 2025, 45,000 dockworkers settled a contract requiring union approval before automated equipment could be procured at ports. They did not stop automation. They negotiated its tempo. That outcome required leverage most workers do not have. But business leaders and policymakers shaping AI deployment do have that leverage. The window for using it narrows with every major layoff, every red line crossed, and every anti-human decision made.
The conditions for crisis are already assembling in the unemployment data, in the engagement metrics, in the quiet exit of a generation from institutions too occupied to notice or too indifferent to respond. The long-term societal damage from this will not announce itself loudly. It will arrive slowly, subtly, and then suddenly and explosively, in forms preventable only if addressed now. The choice is not whether to act. It is how much pain we will inflict upon ourselves before we act.
So What Do We Do?
Three interventions follow directly from the evidence this piece presents. First, the dockworker model is a template: sectoral frameworks that extend automation-approval provisions to industries without existing union infrastructure would give workers outside organized labor some procedural check on displacement decisions.
Second, the Stanford feed research points toward a design obligation: if platform architecture that reduces hostile content also reduces hostility, then feed composition is a policy lever, not an immutable feature of engagement optimization.
Third, the governance gap the Harvard Law Forum identified has a straightforward remedy: board-level disclosure requirements for AI-driven workforce decisions, modeled on existing material risk reporting, that force downstream consequences into the same deliberative process as the deployment decision itself.
None of these stop AI adoption. All three slow the specific mechanism this piece describes: the conversion of unaddressed displacement into algorithmically amplified grievance, before it compounds further. The question remains: will we sacrifice our future generations at the altar of competitive edge? Or will we approach this new technology with wisdom and discernment?
The post The Grievance Economy appeared first on Small Wars Journal by Arizona State University.