Devils in the Details
The fight between Anthropic and the Pentagon was reported mostly as a story of corporate principle versus national security necessity. Anthropic drew two red lines: no fully autonomous weapons and no mass domestic surveillance. The Pentagon insisted on language permitting use of the AI for all lawful purposes. Anthropic refused, and the Trump administration responded by calling the company a national security risk, banning its technology from every federal agency, and having the president personally call it a left-wing operation making a disastrous mistake.
The autonomous weapons piece is real and worth its own conversation. But it was the surveillance clause that drove the final breakdown. And to understand why, you have to understand what the law already permits, what AI adds on top of it, and why that combination should concern every American who is not yet concerned.
Start with what is already legal. Under current United States law, it is entirely lawful for government agencies to purchase commercially available personal data from private data brokers. No warrant required. No judicial oversight. The agencies use their institutional credit cards the same way a business buys a mailing list. The Defense Intelligence Agency told Congress in 2021 that it buys bulk smartphone location data from commercial brokers and does not seek warrants to do so, based on its reading of Supreme Court precedent. In 2024, declassified correspondence confirmed that the NSA purchases Americans' internet browsing records from commercial brokers the same way, without a warrant. Senator Ron Wyden of Oregon, after a three-year investigation, concluded that agencies had effectively been using their purchasing power to circumvent what the Fourth Amendment would otherwise require.
A bill that would have closed this loophole passed the House of Representatives in 2024. The Senate killed it. So the pipeline remains wide open today.
This is the legal foundation the Pentagon wants to build on. It is not a hypothetical. The agencies are already buying the data. The question is what happens when you attach a frontier AI system to what they are already doing.
The answer to that question is the reason Anthropic drew the line it drew.
Before AI, bulk data purchases had a practical ceiling. Analysts were human. They could only review so many files. Location records for thirty million people are theoretically rich and practically unworkable without enormous human labor. The data sits in a warehouse and most of it never gets read. This is what privacy scholars have called practical obscurity, and for decades it provided a functional, if informal, limit on how invasive government data collection could actually become.
Modern AI eliminates that ceiling entirely. As Samir Jain, the vice president of policy at the Center for Democracy and Technology, put it directly: "If you buy up massive amounts of data and allow AI to analyze it, you may end up, in effect, engaging in mass surveillance of Americans through that process. It's not currently restricted by law or prohibited by law."
What AI makes possible, at industrial scale and near-zero marginal cost, is something qualitatively different from anything that came before it. Location traces become pattern-of-life reconstruction. The system reads where you go every day and infers that you attend a mosque on Wednesday mornings, that you visited a Planned Parenthood clinic twice last year, that you were present at a protest in October, that you meet the same person every two weeks in a neighborhood you do not live in. No human analyst spends weeks on this. The AI does it in seconds, for everyone in the dataset simultaneously.
Browsing and app data become inference engines. Your political views, your health conditions, your financial vulnerabilities, your religious beliefs, your sexual orientation all become derivable at scale from the pattern of what you read and click and search. The FTC took action against data broker X-Mode/Outlogic in 2024 over its sale of sensitive location data, drawing on prior reporting that the company had harvested location data from Muslim prayer apps and dating apps and sold it to US military contractors. That is not a corner case. It is a description of the market as it currently operates.
And then the AI links all of it together. Geolocation data, browsing history, app usage, financial records, social connections, all assembled into a single profile on any individual whose data has ever touched a commercial source, which is to say virtually every person in the country. The Office of the Director of National Intelligence has already proposed a centralized marketplace to make it easier for intelligence agencies to purchase this kind of data. Palantir, which works directly with the Pentagon, already applies machine learning algorithms to exactly these kinds of aggregated commercial datasets.
This is not a future risk. ICE has deployed location tracking systems explicitly marketed for use at protests. The Trump administration's "catch and revoke" program to identify students with alleged sympathies toward Gaza has been assisted by AI review of social media accounts. The administration has used these tools against immigration activists, protesters, and political opponents. The trajectory is not ambiguous.
Now ask the question that the Pentagon's behavior makes unavoidable. If the agencies genuinely have no interest in using AI for domestic surveillance of Americans, as Pentagon spokesman Sean Parnell asserted publicly, why did they fight so hard to keep the contractual door open? Why did they spend months in negotiations, invoke threats of the Defense Production Act, call the CEO a liar with a god complex, get the president to post about it on Truth Social, and deploy the full institutional weight of the Department of Defense against a single clause in a single contract? And why, in the final hours of negotiations, did they offer to accept all of Anthropic's existing terms on one condition: that Anthropic delete a single phrase, the one about "analysis of bulk acquired data"? Amodei disclosed this in a memo to employees. "That was the single line in the contract that exactly matched the scenario we were most worried about," he wrote.
The Pentagon told us what it wanted. It wanted the AI applied to the commercially purchased bulk data pipeline. That is what this fight was about.
OpenAI stepped in within hours of Anthropic's blacklisting and agreed to the "all lawful purposes" framework Anthropic refused. Its original contract used the phrase "private information" where surveillance was concerned, language that would have left geolocation data, browsing records, and financial information purchased from data brokers entirely available for AI analysis. Only after an employee revolt and a wave of public cancellations did OpenAI amend the language to explicitly cover commercially acquired data. Altman admitted the original deal "just looked opportunistic and sloppy." The amendment is better. Whether it holds under classified operational pressure is a different question, and it is one OpenAI's critics have not stopped asking.
Americans who have followed the global conversation about AI surveillance tend to locate the threat in Beijing. That framing is accurate as far as it goes. China has deployed up to 600 million surveillance cameras equipped with AI facial recognition. In Xinjiang, the Integrated Joint Operations Platform aggregates data from cameras, iris scanners, checkpoints, and financial records to flag behavior and trigger detention automatically. Chinese firms have exported this surveillance architecture to more than 80 countries. It is a genuinely chilling model of what AI-enabled social control looks like at full deployment.
But the American model under construction has a different architecture and, arguably, a more insidious legal foundation. China built its surveillance state through overt government infrastructure. The state owns the cameras and the networks. The coercion is visible and direct. What the American data broker system has built is a private market that achieves surveillance at comparable scale while maintaining the legal fiction that no government collection is occurring. The state does not need to build the network. It purchases access to one that already exists, assembled voluntarily by the commercial decisions of hundreds of millions of people using apps, navigating with their phones, and browsing the web. The legal distinction between that and a state-owned surveillance network is real. The practical distinction, once AI is applied to the analysis, is vanishing.
The difference that matters most between China's model and the emerging American one is not technical. It is political. China's surveillance state was built to serve a system with no meaningful mechanism for citizens to challenge government power. The American system still has courts, elections, a free press, and constitutional protections, though all of these are under stress in ways that are not subtle. The danger of attaching AI to a bulk data pipeline that already circumvents warrant requirements is not that it creates a Chinese-style system overnight. It is that it creates the infrastructure for one, quietly, inside the legal framework of a democracy that may not remain a functioning democracy indefinitely. Infrastructure built for counterterrorism gets used for immigration enforcement. Infrastructure built for immigration enforcement gets pointed at protesters. Infrastructure built for protesters gets pointed at political opponents.
Anthropic drew a line against this. The Pentagon decided that line was unacceptable.