Americans, Under Surveillance
My last note was about the fight between Anthropic and the Pentagon. In it, I described how the United States government has been legally purchasing bulk personal data about Americans from commercial data brokers for years, without warrants, and why attaching a frontier AI system to that pipeline would transform it into something qualitatively more dangerous. The Pentagon wanted that capability. Anthropic refused to enable it. The Trump administration designated Anthropic a national security risk for holding that position.
A friend asked the natural follow-up question: if the government can do this, can private companies do it too?
Yes. Any of them. And most of the time, with even fewer constraints than the government faces.
My friend's next question was: why don't Americans care?
My question back to him was: do Americans even understand what is happening, and what it implies?
We discussed those questions at length, and I have documented the conversation here.
The same data broker market that sells location records, browsing histories, financial profiles, and behavioral data to the Defense Intelligence Agency and the NSA sells the identical data to hedge funds, insurers, landlords, employers, political campaigns, private investigators, and foreign governments operating through domestic intermediaries. There is no federal law that prohibits it. There is no registry of who is buying what about whom. There is no requirement to tell you it is happening. You have no right to see your own file. And because the data was technically generated by your own voluntary use of apps and websites, the companies selling it argue, successfully, that no one's privacy has been violated at all.
What I want to do here is make that concrete, because abstractions about data ecosystems do not convey what this actually means for the average US citizen.
A mid-tier data broker holds, on average, between 1,500 and 3,000 individual data points on every American adult. That includes your name, address, and phone number, which you already know. It also includes your precise location history, derived from every app on your phone that requested location access, which you may not have thought about carefully. That means the broker knows which medical offices you have visited, which houses of worship, which political events, which bars, which clinics, which therapists' offices, and which addresses you sleep at regularly that are not your own. Imagine a marketer who wants to know which people are likely cheating on their spouses; this file can tell them.
It also includes your browsing history, purchased from your internet service provider or assembled from trackers embedded in websites you visited. It includes your purchase history, assembled from loyalty programs, credit card data sales, and retail partnerships. And it includes inferences drawn from all of this: your likely income range, your estimated net worth, your probable political affiliation, your health status as inferred from the sites you visit and the products you buy, your likely religion, your sexual orientation as inferred from your behavior, and a score estimating how financially vulnerable you are at this moment. That last data point is called a financial stress score, and it is sold openly.
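To make the shape of that file concrete, here is a hypothetical sketch of a single broker record, written as a Python dictionary. Every field name and value below is invented for illustration; it reflects the categories described above, not any actual broker's schema.

```python
# A hypothetical data broker record, invented for illustration.
# A real file holds 1,500 to 3,000 such fields; this shows a dozen.
broker_record = {
    "full_name": "Jane Q. Sample",
    "home_address": "123 Elm St, Springfield",
    "phone": "+1-555-0123",
    "location_pings_90d": 41_000,             # one ping every few minutes
    "frequent_places": ["neurology clinic", "church", "bar",
                        "residence not her own"],
    "browsing_segments": ["chronic illness", "union law", "debt relief"],
    "purchase_segments": ["gluten-free", "loyalty-card groceries"],
    "inferred_income_range": "$40k-$60k",
    "inferred_net_worth": "$25k-$75k",
    "inferred_politics": "lean_left",
    "inferred_religion": "catholic",
    "inferred_orientation": "inferred_from_behavior",
    "financial_stress_score": 87,             # 0-100, sold openly
}
```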
None of this required a warrant. None of it required your consent beyond the terms of service agreement you did not read when you downloaded a free flashlight app in 2019.
Now add a frontier AI system to that file. The following examples are not hypothetical. They describe capabilities that exist today, that are being deployed commercially, and that operate entirely within current United States law.
Your employer buys a data package on all current employees and runs it through an AI model designed to identify union organizing risk. The model flags you because your location history shows you had lunch three times in the past month with a known labor organizer, you visited the website of a union law firm, and your financial stress score has risen sharply, which the model correlates with increased receptivity to organizing. You are not told. There is no law requiring disclosure. Within a week you are moved to a less critical project.
An insurance company you have never interacted with purchases a behavioral data package and builds a health risk profile on you before you ever apply for coverage. The profile shows you visited a neurologist twice last year, searched for information about a specific medication associated with a chronic condition, and your grocery purchase history includes items correlated with a particular dietary restriction associated with that condition. When you apply, you are quoted a premium 40 percent higher than your neighbor. You are told only that the premium reflects your risk profile. You have no legal right to see the data that produced it.
A landlord in the city you are moving to purchases a tenant screening report assembled by an AI from your public records, your purchase history, and data broker files. The report assigns you a low tenancy score partly because your location data shows you spent three nights a week for four months at an address associated with an eviction filing that was not yours, the address of a friend you were helping through a dispute. You are denied the apartment. The screening company is not legally required to tell you which data points drove the decision.
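All three of those scenarios run on the same machinery, and it is worth seeing how little machinery there is. What follows is a minimal sketch, in Python, of the kind of flagging logic the employer in the first scenario could run over a purchased record like the hypothetical one above. Every field name, weight, and threshold is invented for illustration; the insurer's and the screening company's versions differ only in which purchased fields go in and which adverse decision comes out.

```python
def union_organizing_risk(record: dict) -> float:
    """Hypothetical flag of the kind described above. All weights,
    thresholds, and field names are invented for illustration."""
    score = 0.0
    # Purchased location data: meals co-located with a known organizer.
    lunches = record.get("organizer_co_locations_30d", 0)
    score += 0.3 * min(lunches, 3) / 3
    # Purchased browsing data: visits to union-side law firm sites.
    if "union law" in record.get("browsing_segments", []):
        score += 0.3
    # A sharply risen financial stress score, which the model treats
    # as a proxy for receptivity to organizing.
    if record.get("financial_stress_score", 0) >= 80:
        score += 0.4
    return score  # above some cutoff, you are quietly moved off the project

flagged = union_organizing_risk({
    "organizer_co_locations_30d": 3,
    "browsing_segments": ["union law"],
    "financial_stress_score": 87,
}) > 0.6   # True
```

The point is not the specific arithmetic. The point is that nothing here requires a frontier model at all; the frontier model's contribution is richer inputs and sharper inferences over thousands of fields instead of three.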
A political campaign purchases location data showing everyone who attended a specific rally, cross-references it with consumer profiles, and runs an AI targeting system to identify the fifty people in that group most likely to be persuadable, most financially stressed, and most susceptible to a specific emotional message. The AI then generates personalized versions of that message for each of them, adjusted for their inferred psychology, and delivers them through paid social media ads that are labeled as political advertising but carry no information about the data used to target you specifically.
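Mechanically, that campaign workflow is a join followed by a ranking. Here is a hedged sketch of what such a pipeline might look like; the dataset shapes, field names, and weights are all invented for illustration.

```python
def top_persuadable_attendees(rally_device_ids: set[str],
                              consumer_profiles: dict[str, dict],
                              n: int = 50) -> list[str]:
    """Join purchased rally location data against purchased consumer
    profiles, then rank attendees by a composite susceptibility score.
    Hypothetical field names and weights throughout."""
    def susceptibility(profile: dict) -> float:
        return (0.4 * profile.get("persuadability", 0.0)           # modeled
                + 0.3 * profile.get("financial_stress_score", 0) / 100
                + 0.3 * profile.get("emotional_reactivity", 0.0))  # inferred
    attendees = [d for d in rally_device_ids if d in consumer_profiles]
    ranked = sorted(attendees,
                    key=lambda d: susceptibility(consumer_profiles[d]),
                    reverse=True)
    return ranked[:n]  # each then receives an AI-personalized message
```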
Then there is the door that faces outward.
Congress passed a law in 2024 restricting direct data broker sales to China, Russia, North Korea, and Iran. What that law does not address is the purchase of American personal data through domestic intermediaries, shell companies, foreign subsidiaries of American firms, or any buyer from the other 190 countries on earth. A 2025 UK government report concluded that foreign adversaries can and do purchase sensitive data on American citizens, infrastructure workers, and political figures using brokers as the mechanism. The data on where a defense contractor's employee goes every morning, which route a federal judge takes home, and what a congressional staffer's financial vulnerabilities are is all sitting in the same market.
China does not need to hack your phone. It can buy a detailed behavioral profile on you, assembled from your own voluntary app usage, delivered as a commercial data product, legally, from a broker in Delaware.
There is a commercial use case that almost never appears in the public conversation about data privacy, possibly because the people most capable of writing about it have the most to lose from its exposure.
Nothing in American law prohibits a company from purchasing bulk data about the employees, customers, or partners of a competitor and running AI analysis on it for competitive advantage. Pattern-of-life reconstruction works on professional targets the same way it works on individuals the government wants to monitor. Where your competitor's engineers go after work, which recruiters they are meeting, which conferences their product team attended, what technical topics their employees have been searching, whether their key executives are showing signs of financial stress that might make them recruitable: all of this is derivable from commercially available data once AI is applied to the analysis.
Hedge funds have been doing versions of this for years. The difference now is that the same capability, once available only to firms with nine-figure data budgets and teams of quantitative analysts, is accessible to any company with an API key and money.
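To see why, consider how little a basic pattern-of-life reconstruction actually requires. The sketch below, with invented conventions for the input data, infers a device owner's likely home and workplace from nothing but purchased timestamped location pings. Commercial pipelines are far more sophisticated; the principle is this simple.

```python
from collections import Counter
from datetime import datetime

def home_and_work(pings: list[tuple[datetime, tuple[float, float]]]):
    """Hypothetical pattern-of-life sketch: each ping is a timestamp plus
    a rounded (lat, lon) cell, as purchased from a location data broker."""
    night, day = Counter(), Counter()
    for ts, cell in pings:
        if ts.hour >= 22 or ts.hour < 6:
            night[cell] += 1          # overnight dwell: likely home
        elif 9 <= ts.hour < 17 and ts.weekday() < 5:
            day[cell] += 1            # weekday midday dwell: likely work
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work
```

Everything else in the competitive intelligence scenario, from recruiter meetings to conference attendance, is a variation on the same counting exercise over the same purchased pings.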
Here is where the commercial risk diverges most sharply from the government one, and why in some respects it is the larger threat.
When the government surveils you and uses that information to harm you, you have constitutional claims. They are difficult to pursue, the agencies have invoked state secrets successfully for decades, and the courts have not always been helpful, but the framework for challenging government overreach exists and occasionally works. Edward Snowden's disclosures produced a public reckoning. Senator Wyden's three-year investigation produced documented findings. Anthropic's refusal to enable AI-assisted bulk data analysis produced a national news story.
When a private company builds an AI profile of you from commercially purchased data and uses it to deny you a job, raise your insurance premium, decline your rental application, or sell an analysis of your psychological vulnerabilities to a political campaign, your remedies are a patchwork of state privacy laws that vary enormously by jurisdiction, a Federal Trade Commission whose appetite for enforcement has been deliberately narrowed under the current administration, and whatever your state legislature happened to pass before its session ended. In most states, that adds up to very little.
The Biden administration's Consumer Financial Protection Bureau proposed a rule in late 2024 that would have classified data brokers as consumer reporting agencies, requiring them to comply with accuracy, consent, and access obligations. The Trump administration quietly killed it in early 2025. The Fourth Amendment Is Not For Sale Act, which would have prohibited government agencies from purchasing data they would otherwise need a warrant to obtain, passed the House in 2024 and died in the Senate. There is no comparable federal bill restricting commercial use at all.
Privacy advocates have spent years trying to explain why surveillance matters even to people who believe they have nothing to hide. The argument that works, because it is true and observable, is not about secrets. It is about behavior change.
When people know or reasonably suspect they are being watched, they act differently. They attend fewer protests. They search for medical information less candidly. They are more careful about who they call and what they say. They self-censor on social media. They avoid being seen in places that could be misread. Researchers call this the chilling effect, and it does not require the surveillance to be accurate, targeted, or even real. The knowledge that a system exists that could flag your behavior is sufficient to change it.
A country where the population has internalized the habit of self-censorship because an invisible commercial surveillance infrastructure might be watching is a country with a diminished public square, whether or not any particular agency ever looks at any particular file. The Chinese Communist Party did not need to monitor every conversation in Xinjiang to change the behavior of every person in Xinjiang. It needed people to believe monitoring was possible. The American commercial data ecosystem, before AI and certainly after it, has already achieved that condition for tens of millions of people, not through state coercion but through the logic of a market in which your behavior is the product and your anxiety is someone else's profit.
The architecture differs from China's surveillance state in ways that matter legally but not practically. China built its system through overt government infrastructure. The state owns the cameras, the networks, and the analysis. Coercion is visible and direct, which is why it generates international condemnation and why activists in Hong Kong wore masks and carried umbrellas to defeat facial recognition during the 2019 protests.
The American system achieves comparable depth of coverage through a private market that the government can access by purchase, and that private actors can access the same way. The legal distinction between a state owned surveillance network and a commercial one whose output the state purchases is real in constitutional terms. The practical distinction, once AI is applied to the analysis, is vanishing. You can protest against a government program. You cannot opt out of a market.
There is one more difference worth naming. China's surveillance state was built to serve a system with no meaningful mechanism for citizens to challenge government power, and it is deployed openly enough that people understand what it is. The American version is being assembled quietly, piece by piece, through commercial transactions that each appear individually innocuous, in a legal environment where most of the population does not know their data is being sold, to whom, for what purpose, or what AI is doing with it once it arrives. Authoritarianism built in plain sight can be resisted. Authoritarianism assembled from your own app permissions, your loyalty card, and your mortgage payment history, and then sold to whoever can afford the subscription, is harder to see and therefore harder to fight.
The Anthropic fight with the Pentagon was significant for the reason I described in the earlier piece. The Pentagon revealed exactly what it wanted when it asked Anthropic to delete one specific phrase from the contract: the phrase about analysis of bulk-acquired data. That is the capability at the center of everything I have described here.
But the Pentagon is one buyer. The data broker market serves thousands of them every day. And unlike the Pentagon, most of those buyers face no Anthropic, no Dario Amodei willing to absorb a national security designation rather than hand over the key. They face a market that is open, legal, lightly regulated, and growing.
The question the Anthropic story raised was whether AI should be permitted to make bulk surveillance of Americans faster, cheaper, and more powerful for the government.
The question this story raises is why Americans don't seem to care. The answer, as my friend and I concluded, is complicated.

