Battlefield AI lessons learned
The past month delivered something rare in national security: real-world evidence of how artificial intelligence performs in conflict. Not in theory, not in demonstrations, and not in Silicon Valley pitch decks – but in the messy, high-stakes environment of modern warfare. What works in a demo doesn’t always survive first contact with reality. Some of those lessons are encouraging. Others should give us pause.
Let’s start with Anthropic. The company raised serious and necessary questions about how its technology should be used, particularly around mass surveillance of American citizens and fully autonomous lethal systems. Those concerns matter. They surfaced directly in negotiations with the Pentagon, where Anthropic resisted certain applications of its AI. But the way those concerns were handled became a case study in how not to engage on national security. Rather than working through established channels with defense stakeholders, Anthropic tried to impose constraints mid-negotiation. That didn’t clarify the issues; it escalated them. The result was a breakdown in trust so severe that the U.S. government labeled the company an “unacceptable risk” in wartime.
That outcome should concern everyone. Not because the ethical questions were controversial, but because the process for aligning on transformative technologies broke down. When companies and the government fail to coordinate on how critical systems are used, the result is fragmentation at the moment cohesion matters most. As one OpenAI employee told me, “I’m happy to push out Anthropic – but I didn’t want to win like this.”
Then there is Maven Smart Systems, the military’s leading AI-enabled command-and-control program of record, now taking a well-earned victory lap after recent operations in Iran. The Maven platform has performed well in environments where connectivity is stable and data flows freely. In Iran, operations occurred under largely uncontested network conditions, without sustained communications denial, degraded networks, or bandwidth limits. While Maven and similar technologies offer tremendous advantages in that type of theater, complications can arise in more austere environments where adversaries deploy sophisticated signals interference and electronic warfare. That’s where an edge-first approach can make a meaningful difference.
Future conflicts – particularly against near-peer adversaries – will not offer the luxury of persistent, high-bandwidth connectivity. GPS and satellites will be targeted. Networks will be jammed. Undersea internet cables may be cut. Links to centralized cloud systems will fail. In those conditions, cloud-first AI risks becoming a spectator rather than a participant. Thankfully, the AI industry has already learned how to overcome this challenge through years of experimentation with self-driving cars and autonomous robotics.
Across the robotics industry, there is a growing recognition that intelligence must move closer to where data is generated. As Bessemer Venture Partners noted in a recent roadmap, the rise of “physical AI” reflects a shift away from centralized processing toward systems that operate directly in the real world, powered by advances in edge computing. This is not theoretical; it is driven by constraint. Autonomous systems generate more data than any link can reliably carry, and they must act faster than a round trip to the cloud allows. Every roboticist understands these limitations. You don’t build a self-driving system that depends on constant cloud access. You build it to think locally, act independently, and keep working when disconnected.
The battlefield is no different – if anything, it is more extreme. Modern military operations generate vast amounts of data from drones, sensors, vehicles, and individual warfighters. Trying to push all that information back to a centralized cloud is not just inefficient; under contested conditions it is impossible. The future of battlefield AI is edge-first. That means deploying machine-learning models directly on devices. It means enabling real-time decision-making without relying on distant infrastructure. And it means designing systems that degrade gracefully rather than failing completely when connectivity is lost. These are not abstract design choices. These are new battlefield requirements.
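That graceful-degradation pattern can be sketched as a simple fallback loop. Everything below is hypothetical – the class name, the methods, and the connectivity check are stand-ins for real on-device and cloud models – but it illustrates the design: the richer remote model is preferred when a link exists, and losing the link degrades answer quality instead of halting the system.

```python
class EdgeInference:
    """Minimal sketch of an edge-first inference loop (all names hypothetical)."""

    def local_model(self, frame: bytes) -> str:
        # Stand-in for a small on-device model (e.g., a quantized detector).
        return "local:detection"

    def cloud_model(self, frame: bytes) -> str:
        # Stand-in for a remote call that fails under jamming or link loss.
        if not self.link_up():
            raise ConnectionError("link down")
        return "cloud:detection"

    def link_up(self) -> bool:
        # Placeholder connectivity probe; here we simulate denied conditions.
        return False

    def infer(self, frame: bytes) -> str:
        # Prefer the richer cloud answer, but never depend on it:
        # a lost link means a local answer, not no answer.
        try:
            return self.cloud_model(frame)
        except (ConnectionError, TimeoutError):
            return self.local_model(frame)

node = EdgeInference()
print(node.infer(b"sensor-frame"))  # prints "local:detection" – the local fallback
```

The key design choice is that the on-device model is the default path, not an afterthought: the system is built to operate disconnected and treats cloud access as an enhancement when available.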
Contrasting the recent events around Anthropic and Maven Smart Systems, the central lesson is that AI is a powerful tool that must be used at the right time and place to be most effective. The past few months have shown that AI can be fiercely misunderstood, but it is also, at bottom, a new tool on the battlefield. Like all other tools in dangerous environments, it can be misused or poorly designed for the needs of frontline practitioners. The United States has the talent and the technology to lead in this space, but only if we learn the right lessons and apply them amid current conflicts and before the next one begins.
Ian Kalin is the CEO and co-founder of TurbineOne and the former Chief Data Officer of the U.S. Department of Commerce, with a background spanning military service, government innovation, and startup leadership.