Why Robots Observe, But Humans Still Decide
Today, Boston Dynamics’ robot dogs—which cost up to $300,000 each—are already patrolling data centers across the United States, guarding the infrastructure powering Big Tech’s generative A.I. The companies building the world’s most powerful artificial intelligence are entrusting its protection to robots. A.I. is, in effect, guarded by robots that themselves run on A.I.
There is an important detail in this arrangement: these robots do not make a single decision on their own. Their role is strictly observational: monitoring, patrolling and detecting anomalies. They do not use force or take autonomous action. Any response to a perceived threat remains firmly in human hands.
That these robots make no independent decisions reflects a conscious choice by developers and operators worldwide. The constraint is not the policy of a single company or jurisdiction. It is a shared principle that has emerged independently across regions—from Dubai to New York, across Chinese cities and American data centers. Wherever these systems are deployed, the same boundary holds: robots observe, humans decide.
The infrastructure boom and new demand for surveillance robots
American technology companies are investing hundreds of billions of dollars in new data centers. As infrastructure expands, demand for autonomous security systems is rising in parallel. The global security robotics market is projected to reach $19.18 billion in 2026 and to double to $45.31 billion by 2033, representing an average annual growth rate of 13.1 percent.
The rationale is straightforward. Securing modern infrastructure with human personnel alone is increasingly inefficient: the volume of repetitive monitoring tasks is too high, and the demand for continuous oversight is too great. Robots, on the other hand, can handle both patrols and industrial inspections, including in environments that are difficult or hazardous for humans.
At the same time, deployment of autonomous systems in public spaces has been growing over the past few years. In October 2025, Dubai Police introduced an autonomous patrol robot at the Global Village entertainment complex. The robot moves independently through the crowds, captures 360-degree video and transmits it to a control center.
In January, a traffic-directing robot made its debut at a busy intersection in Wuhu, China. In the U.K., Nottinghamshire police are testing a robotic dog in armed sieges and hostage scenarios, where the robot enters first to assess the conditions while all decisions regarding force remain with officers.
Across these different countries and systems, one principle remains constant: robots monitor, people decide.
Why does the boundary fall here?
At first glance, the idea that robots observe while humans act seems to contradict the public narrative put forth by many A.I. companies, which have long suggested a path toward fully autonomous systems. In practice, however, current technology does not support that transition without introducing unacceptable risk.
The language models that underpin most modern intelligent systems do not have a grounded understanding of the physical world. They do not understand reality in the human sense. They operate on tokens and probabilistic patterns derived from large datasets. These systems excel at tasks that can be fully expressed in structured text, such as writing code. But when confronted with unpredictable real-world complexity, text isn’t enough. In these situations, models can “hallucinate,” confidently producing outputs that are plausible but incorrect. In a chatbot interface, this is manageable as long as outputs are reviewed. In infrastructure management or public safety, the consequences are far more serious.
Consider a police robot patrolling the streets at 3 a.m. It will encounter situations that no training data can fully anticipate. A person is lying on the sidewalk—is this an assault victim, an unconscious individual, or someone intoxicated? The visual signal may be nearly identical, but the required response differs entirely. Or take someone aggressively attempting to open a car door—are they stealing the vehicle or just trying to get into their own?
Misinterpretation in these contexts can escalate into conflict, civil rights violations or reputational crises for cities and operators.
The self-driving car threshold as an analogy
A useful analogy comes from self-driving cars. It wasn’t enough for industry pioneers like Waymo to show the public that their systems were, on average, no worse than human drivers. Regulators demanded demonstrable statistical superiority: fewer accidents and fewer incidents over equivalent distances.
This threshold remains a subject of debate, but the principle is clear: the greater the potential harm from an error, the higher the bar for proven safety must be. This is especially true for a robot that may one day be granted the right to use force. If armed police robotic systems are ever introduced, they will have to demonstrate not just reliability comparable to that of a human officer, but many-fold superiority across all key metrics in real-world, not laboratory, conditions.
For now, we are still a long way from that threshold. Modern robotic police officers are deliberately unarmed and function primarily as replacements for patrol cars.
Responsible autonomy as the modern norm
The private sector has already learned the hard way that overestimating A.I.’s ability to make decisions comes at a high cost. Swedish fintech company Klarna announced in 2024 that its A.I. assistant was handling the work of roughly 700 customer-service agents, only to begin quietly rehiring humans for those roles a year later.
Across industries, systems that performed well in controlled demonstrations have run into dirty data, non-standard requests and hidden operational costs. Many companies continue to lose an estimated 40 percent of expected productivity to the manual correction of A.I.-generated errors.
As long as A.I. lacks a grounded model of reality and hallucinations remain a systemic feature rather than an exception, critical decisions must remain under human control. Robots guarding data centers, autonomous patrols in Dubai and mechanical police dogs should not give the impression that we are moving toward a world where machines make decisions instead of people. They signal something more pragmatic: a functional division of labor between humans and machines.
The authority to make final judgments must rest with accountable individuals. Only under that condition can progress remain stable and sustainable.