Governing Cognitive Warfare at Ecosystem Speed: Why America Can’t Organize for Influence—and What It Takes to Compete
In the time it takes Washington to schedule an interagency meeting, an adversary can frame an incident for half the world. That is the central problem of cognitive warfare. Meaning now hardens into public and elite “reality” at a speed our institutions were never designed to match.
The United States does not lack tools, talent, or awareness in this space. It has world‑class intelligence agencies, public diplomacy professionals, military information forces, and technology partners. What it lacks is governance that can align those assets quickly enough to matter, without breaking faith with democratic norms. Until the United States builds a way to govern this fight at “ecosystem speed,” it will keep losing contests of perception even as it wins the resourcing debate.
For more than a century, Washington has tried to solve this problem with new labels and new offices. It has experimented with “psychological warfare,” “political warfare,” “information operations,” “strategic communication,” “information warfare,” and now “cognitive warfare,” each time promising that this rebrand would finally catch up to how adversaries use information. The results have been the same: impressive capabilities on paper, uneven performance in practice, and a persistent gap between what senior leaders say they want and what the system can actually deliver.
The core argument of this article is simple: The United States will continue to fall behind unless it builds a national framework that treats cognitive warfare as a governance challenge, not a messaging or capability problem. That framework has to live above any single department, sit close enough to the president to matter, and still be constrained enough to preserve legitimacy. In practice, that means centering responsibility in the National Security Council and creating a thin, permanent integrative body that can synchronize a broad ecosystem of public, private, and allied actors at the speed at which narratives now move.
Structural Gravity, Not Just Bad Choices
Every serious effort to organize psychological, informational, or cognitive activities in the United States has run into the same structural forces. They are less like individual obstacles and more like gravity fields: persistent, predictable, and impossible to wish away.
The first is volatile presidential attention. Under President Truman, the Psychological Strategy Board was created to coordinate national psychological strategy at the highest level. Its authority derived almost entirely from White House backing, and when that attention waned, the board quickly became irrelevant. President Eisenhower understood the fragility of a body that lived on presidential interest alone, so he replaced Truman’s Board with the Operations Coordinating Board, tying psychological and political warfare to policy execution rather than standalone messaging. Eisenhower’s Coordinating Board was effective because it sat close to the president and operated continuously, not because it possessed magical staff work. Yet it too hollowed out once senior focus shifted. The lesson is blunt: Presidential ownership can provide momentum, but executive attention is a variable, not a foundation. Any model that assumes sustained top-cover as its primary source of authority will fail the moment the next crisis takes center stage.
The second gravity field is partisan contestation and legitimacy risk. Influence and information activities intersect directly with media, public discourse, and democratic norms. They are inevitably accused of propaganda, censorship, or domestic manipulation, whether or not those accusations are fair. The history of the U.S. Information Agency illustrates the point. For decades, the Information Agency provided a standing capability to engage foreign audiences and contest adversarial narratives. Its effectiveness was real, but so was the political sensitivity that surrounded it. After the Cold War, concerns about credibility, domestic spillover, and the optics of centralized “information management” eroded support. The Information Agency was dismantled not because its mission was irrelevant, but because its legitimacy proved fragile when the geopolitical narrative changed. In cognitive warfare, legitimacy is not an external constraint to be endured. It is part of the terrain.
The third gravity field is fragmented authority and interagency competition. Responsibility for influence and information has never resided cleanly in one department. Truman’s Strategy Board lacked authority over execution. Eisenhower’s Coordinating Board could compel coordination, but only while firmly backed by the president. The Information Agency possessed execution capacity but remained structurally separate from core policy and military planning, limiting integration. Across these cases, the same tension recurs: Coordination without authority produces delay and dilution; authority without legitimacy produces resistance and backlash. Interagency friction is not a failure of personality or will. It is baked into the American model of separated powers and overlapping jurisdictions.
Together, these three fields explain why U.S. efforts in this space oscillate between over-centralization and fragmentation, between bursts of activity and long periods of drift. Any governance model for cognitive warfare that ignores them will either be paralyzed by politics or implode once attention shifts.
Six Non‑Negotiable Principles
If past failures are rooted in structure more than intent, better slogans will not fix them. Governance has to be engineered deliberately. A century of fitful experience points to six principles that are less an aspirational wish list than a set of minimum design requirements.
First, cognitive considerations must be integrated with policy, not bolted on. Eisenhower’s Coordinating Board insisted that information, diplomacy, and covert action be planned together as parts of policy execution, not as downstream messaging. That remains the core insight. Meaning is shaped by action as much as by words. When influence is treated as a late-stage communications problem, adversaries exploit the gap between what Washington intends and how others interpret it. Integration does not mean every policy becomes a persuasion campaign. It means asking, before decisions are made, how those decisions will be perceived, mischaracterized, and weaponized.
Second, governance needs real authority, not just coordination. The Psychological Strategy Board could recommend but not compel, and departments could veto action through delay. The Operations Coordinating Board’s relative success came from embedding coordination inside policy execution, backed by the ability to adjudicate disputes. In today’s environment, narrative cycles move much faster than Cold War bureaucracies. A body that relies on voluntary cooperation will always arrive too late. Some entity must be empowered to set priorities, resolve conflicts, and require synchronization across departments within their legal mandates.
Third, leadership has to be sustained and empowered. The Information Agency’s professional cadre showed the value of people dedicated full‑time to influence and information, with specialized skills and institutional memory. It also showed the danger of isolating them from core policy and security decision-making. Episodic “czars” and ad hoc task forces tend to drift toward risk avoidance and procedural box-checking once the initial urgency fades. Effective governance demands a continuous locus of responsibility with access to senior decision-makers and insulation from constant reorganization.
Fourth, any serious model must incorporate a structured public–private partnership. Unlike the Cold War, today’s cognitive terrain is largely privately owned. Platforms, media ecosystems, and civil society actors shape narrative velocity and reach in ways the government cannot replicate or control. Historical models that assumed state dominance over information channels no longer apply. Effective governance, therefore, hinges less on content control than on shared situational awareness, resilience, and mechanisms for rapid, lawful collaboration when hostile campaigns threaten democratic processes or national security.
Fifth, there must be a clear mandate, boundaries, and oversight. Ambiguity around what is authorized and what is off limits has repeatedly produced internal hesitation and external backlash. Over time, fear of misuse became as paralyzing as misuse itself. Governance must spell out, in statute and policy, which activities are permitted, which are prohibited, and how oversight is carried out. Clear lanes do not slow action; they enable it by reducing uncertainty and risk aversion.
Sixth, the enterprise must be professionalized, educated, and evaluated. The most durable U.S. efforts in this space invested in people and standards, not just wiring diagrams. The Cold War’s Active Measures Working Group, for example, combined analytic rigor, interagency continuity, and disciplined assessment to expose Soviet disinformation with speed and credibility that ad hoc efforts rarely matched. Modern cognitive governance should build similar professional pathways, shared doctrine, and feedback mechanisms that link behavior to strategic outcomes over time. It should also treat listening—systematic sensing of narratives and grievances—as a national function, not a side-task assigned to whichever office has spare bandwidth.
These six principles do not dictate a single organizational chart. They do, however, sharply narrow the range of models that can plausibly work.
Why No Department Can Own This Fight
Measured against those principles, the idea of assigning “ownership” of cognitive warfare to any single department fails almost immediately.
The Department of War cannot serve as the controlling authority without effectively militarizing the information environment and triggering legal and political friction that slows action past the point of relevance. State is structurally biased toward deliberation, signaling, and consensus-building, which are essential but ill-suited to rapid adaptation in a fluid narrative fight. The Intelligence Community excels at sensing and assessment but is poorly positioned to orchestrate overt, multi‑actor responses at scale. The Department of Homeland Security narrows the problem to domestic protection and resilience, leaving foreign campaigns and gray‑zone activity at the margins.
Each of these institutions is indispensable to execution. None is structurally suited to enterprise‑level coherence across foreign and domestic exposure, public and private actors, and strategic and operational timelines. Giving any one of them formal “ownership” would simply harden the seams that adversaries already exploit. Governance has to live above departmental cultures while empowering them, not subordinating everything to one of them.
A National Security Council‑Anchored Ecosystem Model
Cognitive warfare is not uniquely strategic in the sense that only heads of state should care about it. It is uniquely cross‑cutting. It touches everything from defense posture and sanctions to tech regulation and public health. That makes alignment of intent, authorities, and constraints across departments the core challenge. Only the Executive Office of the President has the mandate and convening authority to impose that alignment without collapsing the fight into one department’s worldview.
Within the Executive Office, the National Security Council system already exists to integrate policy development and implementation across the executive branch. The Council’s “ownership” of cognitive warfare need not imply day‑to‑day operational control. It means accepting responsibility for coherence: setting priorities, adjudicating trade‑offs, aligning authorities, and ensuring that action stays within legitimacy boundaries. In practice, that responsibility can be exercised through the Council’s existing processes and presidential directives, not a sprawling new command. Precedent already exists. Counterterrorism, cybersecurity, and pandemic response all drifted toward executive‑level governance when departmental approaches proved insufficient.
But National Security Council leadership alone is not enough. Between council‑level decision-making and departmental execution sits a missing layer: enterprise integration at speed. What the United States needs is not another large bureaucracy, but a thin core that enables a thick ecosystem.
Centralized Oversight and Decentralized Execution
A statutorily chartered National Cognitive Security Center, anchored to the National Security Council, could fill that role. Its purpose would not be to run influence operations. Instead, it would do three things.
First, it would establish and maintain a shared measurement layer—what some call “narrative intelligence.” Rather than treating narratives as anecdotes or social‑media snapshots, this layer would track how stories form, spread, and harden into behavior across audiences and platforms. The goal is not to count posts but to understand which frames are actually shaping decisions, and where the United States is structurally blind.
Second, it would translate presidential priorities into actionable guidance linked to indicator‑based triggers. That means pre‑agreed thresholds at which departments know they are expected to act, using tools that have already been vetted for legality and legitimacy. Deliberation moves upstream: Instead of arguing about authorities and risk while a campaign is unfolding, those arguments happen beforehand. When indicators flash, the system executes options that have already been cleared.
Third, the center would enable rapid interagency synchronization without centralizing execution. Departments would retain responsibility for operations inside their authorities: State for diplomacy and public diplomacy, Defense for military information activities, the Intelligence Community for collection and certain covert action, and Homeland Security and others for domestic resilience. The center’s distinguishing function would be to adjudicate interagency trade‑offs, manage seams between foreign and domestic exposure, and enforce timely alignment when narratives and perceptions are moving faster than normal policy cycles.
In this model, authority resides at the center, but action remains distributed and federated. Shared awareness at the core allows decentralized execution at the edge without chaos. Speed comes not from cutting corners but from doing the hard thinking before crises, so that when it matters, the system is not improvising under fire.
Guardrails that Accelerate, Not Paralyze
Any proposal to align cognitive tools under the Executive Office of the President raises immediate and legitimate concerns. If poorly designed, such a system could blur legal lines between foreign and domestic audiences, threaten civil liberties, or erode public trust. The most recent National Defense Authorization Act language on narrative intelligence and cognitive warfare reflects this anxiety, pairing calls for improved capabilities with explicit warnings about civil liberties and democratic processes.
Those concerns should be taken as design constraints, not conversation‑stoppers. Effective oversight is not a burden to be minimized. In cognitive warfare, it is a strategic asset. Clear statutory boundaries and robust oversight mechanisms make it easier—not harder—for practitioners to act.
At a minimum, a National Cognitive Security Center should be chartered with explicit prohibitions against targeting domestic political speech or lawful dissent, and with bright lines governing data access, retention, and use. Congressional reporting requirements and independent audits should be built into its operating concept from the beginning, not added after the first scandal. Where security permits, transparency about mission, authorities, and safeguards should be the default, not the concession.
Done well, these guardrails can speed decision-making. Operators who know in advance what is clearly off limits are less likely to freeze in ambiguity or push borderline options up endless legal and political decision chains. Instead of improvising boundaries in the middle of a campaign, officials act inside a framework that has already been debated and legitimized. Ambiguity, not oversight, is what slows cognitive governance today.
Why This Matters for the Army’s Information Warfare Transformation
Although the governance problem is national, its consequences are most visible in the forces asked to execute campaigns with incomplete tools and guidance. The Army is the largest test case.
Recent guidance treats competition below armed conflict and sustained campaigning as central features of modern war. The Army’s Continuous Transformation effort, including the establishment of Information Warfare as a distinct organizing concept that consolidates psychological operations and information operations under a single proponent, is a serious attempt to respond. It signals recognition that influence, information, and cognitive effects are operational concerns, not just tactical support functions.
The challenge is that capacity, alignment, and access have not yet caught up to that demand. In a no‑growth environment, Information Warfare forces remain limited, heavily weighted toward tactical roles, and structurally separated from the decision cycles where national strategy assumes information effects will be integrated at operational and strategic levels. Much of the Army’s influence capability remains concentrated in the U.S. Special Operations Command, a legacy of an era when policy treated influence as niche or episodic rather than enduring. Non‑Special Operations formations are now expected to execute campaigns that require persistent Information Warfare support without guaranteed access to forces designed for that scale and tempo.
Without a complete transformation that actually addresses these structural challenges, Army information warfare will default to the familiar pattern: coordination without control and activity without strategic effect. Units will continue to innovate locally, often impressively, but their efforts will be episodic and hard to scale. The answer is not a new label or doubling down on existing concepts, but for the Army to apply these same principles to design its own governance model for information and influence. Doing so would give the service a coherent frame for how it organizes, where it plugs into the Joint Force and a broader whole‑of‑society approach, what demand signal to expect, and how its contributions are evaluated over time.
Coherence at the Maximum Speed Democracy Allows
The United States does not lack awareness, tools, or talent in cognitive warfare. It lacks institutions that can act coherently at the speed the environment demands while remaining legitimate. Past reforms failed because they treated governance as an afterthought or assumed away structural realities.
A viable model has to operate on two timelines at once: responding fast enough to matter today while continuously learning and adapting for tomorrow. At the national level, that points toward National Security Council‑centered governance with a thin integrative core that enables a broad execution ecosystem rather than competing with it. Within the Department of War and the Army, the same principles can guide ongoing transformation so that Information Warfare becomes a model for scaling influence capabilities across the Joint Force, not another cautionary tale.
This is not an argument for centralized control of information. It is an argument for aligning authority, legitimacy, and action before adversaries define reality. The choice is not between speed and restraint, or between effectiveness and democratic values. It is whether the United States will design institutions capable of achieving coherence at the maximum speed democracy allows. In cognitive warfare, delay is not neutral. It is terminal.
The post Governing Cognitive Warfare at Ecosystem Speed: Why America Can’t Organize for Influence—and What It Takes to Compete appeared first on Small Wars Journal by Arizona State University.