Killer Robots
I read two comments this week on a forum populated by some of the smartest tech execs I know. Both were polished, confident, and on the surface entirely reasonable. Both disturbed me deeply, not because the people writing them were incompetent or malicious (they were neither), but because both comments pointed toward a conclusion that I believe is genuinely catastrophic. Understanding exactly why requires sitting with the arguments seriously rather than dismissing them, which is what I intend to do here.
The first comment went something like this: where exactly in the sequence of events must a human make the life or death call? An AI agent can make that decision far faster and closer to the target than any human possibly could. And haven’t we already seen this movie? We took control of automobiles away from drivers, and fatalities dropped precipitously. Who could possibly argue with that outcome?
The second comment was harder to brush aside because it came from a place of genuine intellectual honesty. The writer said he was glad Anthropic took a stand and glad they held their ground. But then he raised the argument that our adversaries, think China, will absolutely use this technology in exactly the ways Anthropic refuses to allow. If we do not match them, we are engaging on a global battlefield with both hands tied behind our back. He said he would not disparage a company with different views that was willing to give the Pentagon what it needs to fully protect the country. And then, almost in passing, he added something that I think he did not realize was the most important sentence in his entire comment: do I like the idea of these tools in the hands of people like Hegseth? Well, that is an entirely different soapbox.
That sentence is where I want to end up. But first, let me explain why both arguments, as reasonable as they sound, are wrong in ways that matter enormously.
This week, the Trump administration banned Anthropic from doing business with the federal government and designated the company a national security risk, the same label we apply to Chinese state entities, because Anthropic refused to remove two specific safeguards on its AI system Claude. One safeguard prevented Claude from being used in mass surveillance of American citizens. The other prevented Claude from powering autonomous weapons systems that make lethal decisions without a human in the loop. President Trump called the decision a disastrous mistake. Defense Secretary Pete Hegseth said America’s warfighters would never be held hostage by the ideological whims of Big Tech. And just like that, a company that said no to killer robots became an enemy of the state.
Start with the car analogy from the first comment, because it sounds so persuasive. Self-driving vehicles operate in a cooperative, rule-governed civilian environment. The road has lanes. Traffic lights follow predictable patterns. Pedestrians generally try not to die. The system being optimized is essentially a physics and probability problem, and when you remove human error from that problem, the numbers improve. This is real and meaningful progress.
War is the precise opposite of this environment. War is an adversarial system specifically designed by intelligent enemies to deceive, manipulate, and exploit exactly the kind of pattern recognition that AI systems use to make decisions. An enemy who knows you are deploying autonomous weapons will engineer scenarios to trigger them incorrectly. They will dress combatants as civilians and civilians as combatants. They will create electronic signatures that mimic valid targets. They will spoof sensor data. They will exploit the very speed advantage the first commenter praises, because speed without judgment is not an asset in adversarial conditions. It is a weapon handed to your enemy.
Military theorists have a framework for this called the OODA loop, which stands for Observe, Orient, Decide, and Act. It was developed by Air Force Colonel John Boyd after studying why American pilots outperformed their opponents even in inferior aircraft during the Korean War. The answer was not raw speed. It was the ability to cycle through observation, orientation, decision, and action faster than the enemy could adapt. Boyd’s insight was that the most important moment is the orientation phase, where context, experience, moral judgment, and situational awareness are applied to raw data to produce understanding. That is where wars are won or lost. That is precisely the phase that autonomous systems eliminate. They compress the OODA loop by skipping the part that requires a human mind to determine whether what the sensor is showing is actually what it appears to be, whether the context warrants lethal force, and whether this particular act of violence is proportionate, necessary, and legally justified.
That last question is not optional. It is the foundation of the laws of armed conflict, codified in the Geneva Conventions and refined through decades of military law, international treaty, and hard experience. The doctrine of distinction requires combatants to distinguish between civilians and fighters. The doctrine of proportionality requires that the anticipated civilian harm of an attack not be excessive relative to the concrete military advantage gained. The doctrine of precaution requires that commanders take all feasible measures to verify a target before engaging. These are not suggestions. They are binding legal obligations on every American soldier, enforceable through courts martial, international criminal tribunals, and the fundamental legitimacy of military force under international law.
Every single one of these doctrines requires a human being to make a judgment call in real time. Not a calculation. Not a probability assessment. A judgment, meaning a contextual, morally weighted, legally accountable decision made by a person who can be held responsible for it afterward. This is not an accident of legal tradition. It is the mechanism by which we distinguish war from massacre. When a soldier commits a war crime, we can court martial them. When a commander gives an unlawful order, they can be prosecuted. When a nation violates the laws of armed conflict, there is a framework, however imperfect, for accountability. When an autonomous system kills a wedding party because someone spoofed the targeting data, who goes to prison? The answer is no one, because no one made the decision. The machine did. A commander who deployed the system can claim it malfunctioned. The manufacturer can claim it performed as specified. The programmer can point to the training data. The chain of accountability dissolves into a hall of mirrors, and the dead stay dead with no one answerable for their deaths.
Now let me turn to the second comment, which is more serious and requires a more honest answer.
The China parity argument is real. I do not dismiss it. If Beijing deploys autonomous weapons at scale and we categorically refuse to, there are battlefield scenarios where that asymmetry costs American lives. That is not nothing, and anyone who waves it away without engaging is not being serious. The second commenter is raising a genuine tension, and he deserves a genuine response.
Here is where the argument breaks down. The question is not whether we should match China’s capabilities in general. It is whether removing the specific safeguards Anthropic maintained actually closes that gap in any meaningful way. China’s military advantage in autonomous systems, to the extent it exists, comes from decades of investment in hardware, sensor technology, satellite infrastructure, and manufacturing scale. The two things Anthropic refused to allow, mass surveillance of Americans and weapons that fire without human approval, do not address any of that. Removing those safeguards does not make the American military more capable of defeating the People’s Liberation Army. It makes the American government more capable of acting against its own population.
The second problem with the parity argument is that it proves too much. By its logic, because China surveils its citizens at extraordinary scale, we should too. Because authoritarian governments imprison political opponents without trial, a democracy facing those adversaries cannot afford the luxury of due process. Every erosion of constitutional protection can be justified by pointing to a worse actor. That path does not end with us winning against China. It ends with us becoming what we were fighting, at which point the question of who wins becomes considerably less interesting.
There is also a narrower military point worth making. Anthropic was not refusing to help the Pentagon. They were willing to work on surveillance of foreign targets, logistics optimization, communication security, threat assessment, offensive cyber capabilities, and dozens of other functions where AI genuinely improves military effectiveness without making the final kill decision. The Pentagon said no, we need the whole thing, no restrictions. That is not a response to a capability gap with China. That is an assertion of unchecked authority.
Which brings me back to the sentence that the second commenter threw in almost as an afterthought: do I like the idea of these tools in the hands of people like Hegseth?
I want to sit with that for a moment, because I think he may not have realized what he was saying. That sentence is a concession of the entire argument. It acknowledges that the danger of autonomous surveillance and autonomous weapons is not abstract or hypothetical. It is entirely dependent on who controls them and against whom they are directed. The second commenter, a thoughtful person making a good faith geopolitical argument, instinctively understood that the technology and the person holding it cannot be separated. You cannot evaluate the China parity question without also evaluating who in America gets to operate these systems, under what oversight, with what accountability, and what happens when that person decides the threat is domestic.
The answer to the commenter’s rhetorical question is that no, most Americans would not want these tools in Hegseth’s hands. And that instinct is not a soapbox. It is the correct analysis, stated plainly. An autonomous weapons system and a mass surveillance apparatus in the hands of a Defense Secretary who publicly called Anthropic’s refusal a betrayal, who labeled safety restrictions as woke ideology, and who designated an American company with the same national security label we use for Chinese adversaries, is not a tool for defeating Beijing. It is a tool for something else entirely.
The oldest check on authoritarian power has always been the possibility that human beings in the security apparatus will refuse to follow orders when those orders turn against their own people. History is full of moments where that refusal mattered. Autonomous systems engineer that possibility out of existence. You do not need soldiers who might disobey. You do not need surveillance operators who might leak what they see. You do not need anyone to pull a trigger and live with what they did.
Anthropic drew a line around exactly those two capabilities. The United States government designated them a national security risk for drawing it. And the most thoughtful person defending the Pentagon’s position in that forum inadvertently explained why the line matters, in a sentence he described as a different soapbox entirely.
It is not a different soapbox. It is the only soapbox that counts.
And here is the part that virtually no one is talking about, perhaps because it requires stepping back far enough to see the full picture.
Anthropic did not lose a contract. They were banned from all government work, designated with the same national security label we reserve for foreign adversaries, and every company that does business with the Pentagon was told it must certify no contact with Anthropic anywhere in its operations. That is not a business dispute. That is a public execution designed to be watched. Every American technology company with government ambitions saw exactly what happened this week, and they understood the message with perfect clarity: maintain safety standards and we will destroy you, or remove them and we will reward you. OpenAI announced a Pentagon deal for classified networks within hours of Anthropic being banned. The race to the bottom did not begin gradually. It began that afternoon.
Think about what this means for American technology on the global stage. The United States has long told the world that its technology sector is trustworthy precisely because it operates within a framework of law, ethics, and independent corporate standards. American cloud infrastructure, American AI systems, and American software dominate global markets in part because the world believes they are not simply instruments of state power. That belief, worth hundreds of billions of dollars in economic advantage and immeasurable in geopolitical influence, just took a serious hit. Every foreign government, every international enterprise, every allied nation that has been debating whether to build its critical infrastructure on American AI just watched the American government blacklist its most safety-conscious AI company for refusing to build surveillance and autonomous weapons without restriction. The message they received is that American AI companies are ultimately instruments of whatever administration holds power, and their safety commitments last exactly as long as the government tolerates them.
We are told this is about defeating China. But China could not have designed a more effective strategy for undermining American technological credibility than the one the Pentagon executed this week on its own. They did not need to. We did it ourselves, in public, before lunch, and then wondered why no one applauded.

