Anthropic said no. OpenAI said yes. One weekend, one decision—and a masterclass in brand building
Here is a number worth sitting with: 295%.
That’s how much U.S. app uninstalls of ChatGPT surged in a single day last month, after OpenAI struck a deal with the Department of Defense that its rival Anthropic had publicly refused to sign. In the same 24-hour window, Claude’s downloads jumped 51%. By that evening, Anthropic’s app had climbed to No. 1 on the U.S. App Store, having leapfrogged 20 apps in under a week.
One values-driven decision. One weekend. A measurable transfer of market share.
Most of the coverage framed this as a political story. It isn’t. Or at least, not only. It’s also a brand loyalty story. And it tells us something important about the category war that’s actually being fought in AI, one that has very little to do with compute power.
The Switching Cost Nobody Is Naming
Brand strategists understand switching costs intuitively. In banking, insurance, enterprise software—anywhere the friction is high—emotional and values-based factors end up doing as much heavy lifting as product performance. The category with the highest rational switching cost often becomes the category where trust matters most.
AI is moving toward that same dynamic, faster than most people are ready for.
An AI platform doesn’t just perform tasks. It accumulates context. It gets to know us—how we think, our shorthand, our working rhythms. For enterprise users in particular, this depth compounds quickly. The longer a business embeds an AI platform into its workflows, the higher the exit cost becomes, not just technically, but cognitively, culturally, and even emotionally.
There’s a name for this: the relational cost. It’s the switching cost nobody in the AI conversation is actually naming. And in any high-switching-cost category, the brand question eventually becomes the decisive one: what does this company stand for, and do I trust it?
Operationalizing Values Is Not the Same as Talking About Them
The consumer response to the DoD news didn’t come out of nowhere. It was the visible payoff of a positioning strategy years in the making.
Anthropic has been making a consistent, operationalized argument about what kind of company it is, and backing it with choices that carry visible cost. The Claude Constitution is a publicly available, inspectable training framework, not a mission statement. Anthropic’s Economic Index analyzes AI adoption across sectors and positions the company as a participant in the difficult societal conversation about AI’s impact on employment, not just a product vendor. These are category-shaping moves, not PR.
The market had been registering these signals quietly, long before last month. Independent analyses suggest Claude holds 32% of enterprise AI usage, roughly nine times its 3.5% consumer footprint. Enterprises, being more deliberate, more risk-averse, and more consequentially exposed to AI failure, have already been choosing Claude at scale. That gap between enterprise and consumer adoption isn’t a coincidence. It’s a trust premium.
The Cost of Caring
It’s easy to have values when they cost you nothing. For Anthropic, this stance came with a $200 million price tag.
That’s the reported value of the contentious Pentagon contract. Beyond that, the supply-chain risk designation, a label the Trump administration has now formally applied and which Anthropic is challenging in court, threatens hundreds of millions more across broader government contracts. The designation, historically reserved for foreign adversaries like Huawei, had never before been applied to an American company.
That is a real commercial cost, not a hypothetical one. But what looks like a ceiling from one angle looks like a moat from another.
In the weeks since the dispute went public, Anthropic’s revenue run rate has more than doubled, from $9 billion at the end of 2025 to almost $20 billion today, according to Bloomberg. The government closed a door. The market opened several more.
That is not a coincidence. That is what trust, operationalized and defended under pressure, looks like as a growth strategy.
So What Does This Mean for Your Business?
The question that should be on the table in every leadership meeting right now: which AI platforms are you building on, and have you thought seriously about what that association means for your brand?
AI platforms are no longer neutral infrastructure. They carry values, make visible choices, take public positions. The AI your business relies on is becoming part of your brand. When a platform’s ethics come into question—as they periodically and inevitably will—that exposure travels upstream to every company in its orbit.
This creates both a risk conversation and a strategic opportunity. Evaluating AI partners on trust and values criteria, not just capability benchmarks, is the kind of decision that looks obvious in hindsight but takes foresight in the moment.
The Brand Codes Are Being Written Now
Early positioning in emerging categories hardens fast. The companies that define what a space stands for, not just what it does, shape expectations for years. We saw it with social media, with streaming, with fintech. In each case, the brands that defined the category’s values, not just its features, built loyalty advantages that capability alone couldn’t disrupt.
AI is at that moment. The conversation about what kind of category this is going to be is happening now, in public, in real time.
Stop asking which AI is most capable. Start asking which AI your business can afford to be associated with. Because our whirlwind romance with AI is fast turning into something more serious: a committed, often exclusive, long-term relationship in which platform loyalties grow more entrenched by the day.
Choose carefully. Credibility compounds faster than compute. The data is already proving it.