The Last Arms Race
I was just reading the Atlantic’s latest on Artificial Intelligence and it occurred to me that most people have no clue what has happened in the United States over the past twelve months. While public attention was consumed by election cycles, trade wars, and viral AI chatbots, a structural transformation took place that will shape the lives of billions of people for decades to come. The leading artificial intelligence laboratories in America, once celebrated as the vanguard of a new technological revolution driven by private innovation, have been drawn so deeply into partnership with the US federal government that the line between private enterprise and state apparatus has become almost impossible to locate.
This is not a conspiracy theory. It is documented in signed memorandums of understanding, in the text of Public Law 119-60, in executive orders issued from the White House, and in the public statements of the very companies and government officials involved. It is happening in plain sight, in legal language and press releases, and its implications are staggering.
For citizens of the United States and for every nation watching the global AI race with growing anxiety, the question is no longer whether artificial intelligence will be powerful enough to reshape civilization. The question is who will control it, under what authority, and with what accountability to the public. This article traces the transformation through government documents, defense legislation, and the statements of the people building it. What I found concerned me (primarily because I don’t trust an Administration that lies as much as this one), but it doesn’t seem to be concerning anyone else, so let’s walk through my concerns…
In November 2025, President Trump signed an executive order launching what the Department of Energy calls the Genesis Mission. The name was chosen deliberately. This is the largest coordinated effort to merge artificial intelligence with government scientific infrastructure since the Manhattan Project, and the Department of Energy made that comparison itself. The mission mobilizes all 17 of America's National Laboratories, roughly 40,000 DOE scientists and engineers, and an expanding roster of private sector partners to build what the government describes as the most powerful scientific platform ever constructed. Its stated goals are to accelerate American energy dominance, strengthen national security, and double the productivity of American science and engineering within a decade.
The private companies involved are not minor players. Anthropic, OpenAI, Google, NVIDIA, Microsoft, Oracle, AMD, and Amazon Web Services have all signed collaboration agreements with the Department of Energy. By December 2025, twenty four organizations had formalized memorandums of understanding. Anthropic's multiyear partnership with DOE spans energy systems, biological sciences, and research productivity across the national laboratory network. OpenAI models are already running on classified networks at Los Alamos National Laboratory, deployed on the Venado supercomputer, which uses NVIDIA Grace Hopper Superchips and ranks among the twenty fastest computers on Earth. Lawrence Livermore, Sandia, and Los Alamos, the three nuclear weapons laboratories known collectively as the trilabs, have built a federated AI model called Chandler that trains across all three sites simultaneously, sharing model weights without exchanging the underlying classified data.
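The federated setup described above, a model trained across separate sites that share weights but never raw data, can be sketched in miniature. The details of the trilabs' Chandler system are not public, so the code below is only an illustrative toy under my own assumptions: each "site" runs local gradient descent on private data, and only the resulting weights are averaged centrally.

```python
import numpy as np

# Toy federated averaging: three "sites" each hold private data that never
# leaves the site; only model weights are exchanged and averaged.
# This is an illustrative sketch, not the actual Chandler architecture.

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, steps=50):
    """One site's local update: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w  # only the weights leave the site, never X or y

# Each site has its own private dataset drawn from the same true model.
true_w = np.array([3.0, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

# Federated rounds: broadcast global weights, train locally, average.
global_w = np.zeros(2)
for _round in range(5):
    local_ws = [local_train(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # converges toward [3, -2] without any site sharing raw data
```

The point of the pattern is visible in `local_train`: the classified data (`X`, `y`) never crosses a site boundary, yet the averaged model benefits from all three datasets.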
At Argonne National Laboratory outside Chicago, NVIDIA and Oracle are building a system called Solstice powered by 100,000 Blackwell GPUs, which will deliver over two thousand exaflops of AI performance. A second system called Equinox adds another 10,000 GPUs. The Genesis Mission Consortium, launched in early 2026, will serve as the permanent coordinating hub for all of this activity, managed by a partnership intermediary operated by RTI International. Berkeley Lab and Princeton Plasma Physics Laboratory are among the many labs that have published their own accounts of participation.
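The Solstice figures are worth a back-of-envelope sanity check. Assuming, as I do here (the article's sources do not specify), that the headline number counts low-precision AI throughput the way vendor exaflop figures typically do, the per-GPU arithmetic comes out to a plausible Blackwell-class number:

```python
# Back-of-envelope check on the quoted Solstice figures.
# Assumption (mine): "AI performance" counts low-precision throughput,
# as vendor-quoted exaflop figures generally do.

gpus = 100_000
total_exaflops = 2_000  # claimed aggregate AI performance

# 1 exaflop = 1,000 petaflops
per_gpu_petaflops = total_exaflops * 1_000 / gpus

print(per_gpu_petaflops)  # 20.0 petaflops per GPU
```

Twenty petaflops per GPU is in the ballpark of published low-precision throughput for Blackwell-class parts, so the two thousand exaflop claim is internally consistent rather than a typo.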
The scale and speed of this integration are without precedent. And it creates a dynamic that is qualitatively different from previous government technology programs. When the government built the interstate highway system or funded the early internet, it created infrastructure that the public could use on roughly equal terms. What is being built now is something else entirely.
Consider what this means in practice. A scientist at Oak Ridge or Los Alamos now has access to the most capable AI models in existence, running on classified networks with access to decades of proprietary government data spanning nuclear physics, materials science, genomics, and climate modeling. These models operate inside secure data enclaves at Impact Level 5 and 6 classification, connected to satellite imagery, signals intelligence, and logistics databases through zero trust gateway architectures. The researcher working inside this system is interacting with an AI that has been fine tuned on data no private citizen will ever see, optimized for problems no commercial product is designed to solve, and freed from many of the behavioral constraints applied to public versions of the same models.
Meanwhile, the public interacts with commercial versions of these same AI systems through subscription services. These consumer products carry content filters, usage policies, and behavioral guardrails that have been shaped by a combination of corporate liability concerns, political pressure, and, increasingly, federal oversight. In September 2025, NIST's Center for AI Standards and Innovation, known as CAISI, worked directly with OpenAI and Anthropic to identify security issues in their advanced systems and shape how those systems behave. Both companies published blog posts describing concrete changes they made as a result. This is not adversarial regulation. This is collaborative engineering between the government and the companies that build the tools hundreds of millions of people use every day.
A two tier system is emerging. At the top, a small number of credentialed researchers and national security officials work with the most powerful and least restricted AI tools ever built. At the bottom, everyone else gets a product that has been shaped in ways the public has limited ability to examine, challenge, or change. The companies are not hiding this. In mid 2025, the Department of Defense's Chief Digital and Artificial Intelligence Office awarded contracts to all four major frontier AI labs, Anthropic, Google, OpenAI, and xAI, for national security prototyping. Anthropic alone received an agreement with a ceiling of two hundred million dollars. Both Anthropic and OpenAI then offered their enterprise products to the entire federal government for one dollar per agency per year, a loss leader strategy designed to embed these tools so deeply into government operations that they become infrastructure.
If the Genesis Mission represents the handshake, the Fiscal Year 2026 National Defense Authorization Act represents the legal architecture. Signed into law on December 18, 2025, this 1,259 page statute contains dozens of provisions that collectively reshape how artificial intelligence is developed, deployed, governed, and controlled in the United States. The White House issued a formal Statement of Administration Policy confirming its support for the bill's direction.
Section 1535 of the NDAA creates the Artificial Intelligence Futures Steering Committee, co chaired by the Deputy Secretary of Defense and the Vice Chairman of the Joint Chiefs of Staff. This committee is not advisory. It is directed to analyze the trajectory of advanced AI technologies, including those that could enable artificial general intelligence. It will assess adversary AI programs, evaluate the operational effects of integrating advanced AI into military networks, and develop a strategy for what the law calls risk informed adoption. It must be operational by April 1, 2026, meet quarterly, and submit a public report to Congress by January 31, 2027. Apollo Research's analysis noted that this provision could be highly consequential for AI security, as it tasks the committee with proactively addressing systems that approach or achieve artificial general intelligence.
The original draft of the NDAA defined artificial general intelligence as AI capable systems with the potential to match or exceed human intelligence across most cognitive tasks. That definition was removed from the final text, but the committee's mandate to plan for it remains. Congress is preparing for a technology that may not yet exist, but which senior officials in both government and industry increasingly believe could arrive within years, not decades.
Section 1533 requires the Department of Defense to create a cross functional team that will build a standardized framework for assessing, governing, and approving all AI models used by the military. This team will set performance, security, documentation, ethical, and testing standards, and it must evaluate every major Defense Department AI system by January 2028. Section 1512 mandates a department wide cybersecurity and governance policy for AI and machine learning systems, covering everything from lifecycle security to protections against model tampering and data poisoning. Section 1532 bans the Pentagon from using AI systems developed in China, Russia, North Korea, or Iran, specifically naming DeepSeek and its parent company High Flyer. No contractor working with the Defense Department may use these tools either.
The NDAA also codifies outbound investment restrictions through the Comprehensive Outbound Investment National Security Act, amending the Defense Production Act to require mandatory notification of investments in Chinese entities involving advanced technologies like AI and semiconductors. The Treasury Department now has clear statutory authority to prohibit such investments entirely. Existing regulations already restrict investments in AI systems trained above specific computational thresholds, roughly ten to the twenty sixth floating point operations. Detailed analyses from White and Case, Pillsbury, Covington, and Clifford Chance confirm the scope of these new authorities.
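To make the ten-to-the-twenty-sixth threshold concrete, here is a rough sketch using the widely cited approximation that transformer training costs about six floating point operations per parameter per training token. The model sizes below are hypothetical illustrations of my own, not descriptions of any specific system:

```python
# Illustrating the ~1e26 FLOP notification threshold using the common
# 6 * N * D approximation for transformer training compute
# (about 6 FLOPs per parameter per training token).
# The example runs are hypothetical, not specific real systems.

THRESHOLD = 1e26

def training_flops(params, tokens):
    """Approximate total training compute for a transformer."""
    return 6 * params * tokens

runs = {
    "70B params on 15T tokens": training_flops(70e9, 15e12),   # ~6.3e24
    "1T params on 20T tokens":  training_flops(1e12, 20e12),   # 1.2e26
}

for name, flops in runs.items():
    print(f"{name}: {flops:.1e} FLOPs, over threshold: {flops > THRESHOLD}")
```

Under this approximation, today's large open-weight-scale runs sit well below the line, while a trillion-parameter frontier run crosses it, which is roughly the boundary the regulation appears designed to draw.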
Read these provisions together and a pattern becomes visible. The United States government is constructing a regulatory and institutional framework that will determine who can build the most powerful AI systems, who can invest in them, who can access them, and under what conditions. The NDAA did not include a federal moratorium on state AI laws, which the administration had pushed for aggressively, but a December 2025 executive order directed the Department of Justice to develop ways to challenge state laws the administration considers hostile to its AI agenda, and instructed federal agencies to withhold funding from noncompliant states.
There is a logic to all of this that deserves honest acknowledgment. Artificial intelligence is not a social media platform or a search engine. At sufficient capability levels, it becomes a tool that can accelerate weapons design, crack encryption, engineer biological agents, and conduct influence operations at a scale no human propagandist could match. Governments have always controlled technologies with this kind of destructive potential. You cannot privately operate a nuclear reactor without federal licensing. You cannot build ballistic missiles in your garage. The argument that AI of sufficient power belongs in the same regulatory category as nuclear technology is not inherently unreasonable.
But AI is also not a nuclear weapon. A warhead does one thing. An advanced AI system does everything. It can write legislation and evaluate loan applications. It can generate scientific hypotheses and compose political messaging. It can analyze medical records and predict criminal behavior. It can tutor children and surveil populations. The breadth of its application is precisely what makes concentrated control so dangerous.
When a government integrates the most powerful AI systems into its defense, intelligence, scientific, and administrative infrastructure, and simultaneously shapes how commercial versions of those systems behave for the public, it acquires a form of leverage that has no historical parallel. The Brennan Center for Justice warned in its analysis of the 2026 NDAA that the law deepens reliance on private technology contractors while weakening procurement transparency, raising long term risks for accountability, civil liberties, and cost control. The Center for Strategic and International Studies noted that the entanglement of frontier AI companies with national laboratories widens the attack surface for cyber espionage while creating new vectors for adversaries to exploit. A Lawfare analysis observed that as governments view AI through a national security lens, funding priorities and regulatory frameworks shift to favor classified, state controlled development, and research that might have flowed freely across borders becomes siloed within national laboratories operating under strict security protocols.
The structural risk is not that any single provision of law or any single partnership agreement is malicious. The risk is that the accumulation of these arrangements, each individually defensible, creates a system in which the most consequential technology of the twenty first century is governed by a small number of people, operating largely behind classification walls, with limited democratic oversight and no meaningful mechanism for public input.
This is not exclusively an American phenomenon. The European Union's AI Act, which enters full force in August 2026, classifies AI systems by risk tiers and imposes mandatory certification requirements on high risk applications, including those used in law enforcement, hiring, and critical infrastructure. China has implemented vertical regulations targeting recommendation algorithms, deepfakes, and generative AI, requiring providers to ensure their systems disseminate what Beijing calls positive energy. Saudi Arabia has built a centralized national data governance platform that critics argue normalizes surveillance as a condition of civic participation. Every major power is racing to control AI within its borders, and the tools of that control look different depending on the political system, but the direction is the same.
The Atlantic Council's January 2026 analysis identified a global battle of AI stacks, in which the United States, the European Union, and China are building increasingly incompatible approaches to how core digital AI infrastructure functions at home and abroad. The Trump administration's AI Action Plan, published in July 2025, made it explicit federal policy to export the American AI technology stack to allied nations, warning that failure to do so would be an unforced error. The administration has signed AI partnership agreements with Saudi Arabia and the United Arab Emirates and is preparing more for 2026. China, meanwhile, is leveraging its lead in open source AI models and its focus on applied AI to capture market share in developing economies with free models and deployment ready platforms.
For nations in the developing world, the choice is becoming stark. Adopt the American stack, with its security requirements, export controls, and embedded dependencies on U.S. corporate infrastructure. Adopt the Chinese stack, with its surveillance capabilities and political strings. Or attempt to build sovereign AI capacity, which requires capital, talent, and energy resources that few nations possess. A CSIS analysis of the U.S. AI diffusion framework noted that the approval process for accessing American AI chips creates dangerous delays for countries with ambitious data center plans, effectively forcing them to choose sides before they fully understand what they are choosing.
What is conspicuously absent from this entire architecture is any robust mechanism for democratic participation. The Genesis Mission Consortium is governed by memorandums of understanding between federal agencies and private companies. The AI Futures Steering Committee is composed of senior defense officials. The NDAA's oversight provisions require reports to congressional defense committees, not public hearings. The executive order on state AI preemption concentrates regulatory authority at the federal level while threatening to punish states that attempt their own approaches.
The Artificial Intelligence Civil Rights Act, reintroduced by Democratic lawmakers in December 2025, would have established protections for individual civil liberties when algorithms make consequential decisions about people's lives. It has not passed. The federal AI moratorium that would have blocked state regulation also failed, but not because Congress chose public accountability over deregulation. It failed because the political coalition was not large enough. The underlying pressure to centralize AI governance at the federal level, away from state governments and further away from individual citizens, continues to build.
A recent paper in Science documented how malicious AI swarms can threaten democratic processes by coordinating automated influence operations at scales that overwhelm human moderators and institutional safeguards. The Bulletin of the Atomic Scientists has shown that AI surveillance technology exported by both the United States and China correlates with democratic backsliding in countries with weak institutions. Academic research on the intersection of AI and democracy has found that while mature democracies can absorb surveillance technology without losing their institutional footing, fragile democracies cannot. The question America must ask itself is which category it currently falls into.
There is a phrase circulating among technology policy analysts that captures this moment with uncomfortable precision. They call it soft nationalization. Not a seizure of private companies, but a gravitational pull so strong that the distinction between public and private becomes academic. When the government is your largest customer, your security evaluator, your regulatory partner, and the provider of your most critical infrastructure, the word independence loses most of its meaning. When the government offers your product to every federal agency for a dollar, you have not been acquired. You have been absorbed.
Secretary of Defense Pete Hegseth declared in late 2025 that the leading AI models would soon operate on every classified and unclassified network in the Defense Department. The Pentagon's Acquisition Transformation Strategy described itself as an aggressive effort to field technology at a rate that outpaces adversaries. The GenAI.mil platform launched with models from Google and xAI. The AI Acceleration Strategy aims to convert the military into what it calls an AI first warfighting force.
None of this is secret. All of it is publicly available. And that may be the most unsettling part. The merger of the most powerful cognitive technology in human history with the most powerful administrative and military apparatus in human history is not happening in the shadows. It is happening on government websites, in press releases, in signed legislation. The architects of this transformation are not hiding. They are proud. They believe they are building something historic, and they are right. The question is whether what they are building will serve the public or whether the public will end up serving it.
The nuclear analogy that officials frequently invoke to justify government control of AI contains an irony they rarely acknowledge. The Manhattan Project succeeded in building the atomic bomb, and the government did maintain control of nuclear weapons. But the broader legacy of the nuclear era includes an arms race that brought humanity to the brink of annihilation, a classification regime so expansive it concealed government misconduct for decades, environmental contamination at dozens of national laboratory sites that remains unresolved, and a permanent national security state that fundamentally altered the relationship between American citizens and their government.
If artificial intelligence truly is the new nuclear technology, then the history of what happened last time should give every citizen pause. The power being concentrated today is greater than what was concentrated then, because AI touches not just weapons but every domain of human activity. The time to establish democratic oversight, transparency requirements, and enforceable civil liberties protections is now, while the architecture is still being built. Once the concrete sets, as every engineer knows, you cannot easily pour a new foundation.
The future arrived quietly, in memorandums of understanding and defense authorization bills. It is smarter than any previous technology, faster than any previous regulatory effort, and increasingly embedded in the institutions that govern daily life. Whether it remains accountable to the people it will affect most profoundly is not a technical question. It is a political one. And it is the most important political question of this century.
Postscript: Where This Goes
Let’s now lay out what happens next, based on everything documented in this article and on the trajectory China is simultaneously pursuing. I’ll describe the most probable sequence of events, grounded in what is already in motion, and assign rough timelines to each phase.
The first thing to understand is that the United States and China are now running mirror image strategies, and the effect of both will be to eliminate the possibility of a neutral global AI commons within the next five years.
China’s 15th Five Year Plan, expected to be approved at the March 2026 National People’s Congress, will formally institutionalize what Beijing calls Military Civil Fusion as the primary mechanism for defense modernization. The AI Plus framework, already written into the Central Committee’s recommendations, embeds military requirements into civilian AI research from the ground up, producing what Chinese planners call “born dual use” technologies. The People’s Liberation Army rapidly adopted DeepSeek’s models in early 2025 and is already using them for intelligence analysis, drone swarm coordination, logistics, and cyber operations. Unlike the American approach, China does not maintain even a nominal separation between its AI companies and its military. There is no Chinese equivalent of the public debate about whether Anthropic or OpenAI should take defense contracts. The question was never asked because the answer was never in doubt.
But China is doing something the United States is not, and it may prove more consequential than any classified model running at Los Alamos. China is giving its AI away. DeepSeek’s models are released under open source licenses, free to use, free to modify, requiring no subscription, no credit card, no relationship with an American cloud provider. A Microsoft report published in January 2026 found that Chinese AI models have surpassed Western models in cumulative open source downloads since August 2025. DeepSeek holds 89 percent market share in China, 56 percent in Belarus, 49 percent in Cuba, 43 percent in Russia, and between 11 and 14 percent across African nations including Ethiopia, Zimbabwe, Uganda, and Niger. In many of these countries, DeepSeek comes preloaded on Huawei phones running HarmonyOS. Microsoft’s own researchers described open source AI as a geopolitical instrument capable of extending Chinese influence in areas where Western platforms cannot easily operate.
Now play these two strategies forward simultaneously.
By late 2026, the AI Futures Steering Committee will be operational and will have begun shaping the Pentagon’s long term framework for artificial general intelligence. The Genesis Mission’s integrated platform connecting 17 national laboratories will be running at meaningful scale, with the Solstice supercomputer at Argonne delivering over two thousand exaflops of AI performance. The cross functional team mandated by Section 1533 of the NDAA will be building the standardized framework that will govern which AI models the entire Defense Department can use and how they must behave. The outbound investment restrictions codified by the COINS Act will be generating their first enforcement actions, and Treasury will have tightened the computational thresholds that trigger mandatory notification of investments in AI systems connected to China.
Inside the United States, the practical effect will be a hardening of the two tier system I described in the article. Government researchers and national security personnel will work with increasingly powerful and unconstrained AI systems on classified networks. The public will use commercial products whose behavior is shaped by an expanding web of federal partnerships, security evaluations, and implicit political pressures. The gap between what the government’s AI can do and what the public’s AI can do will widen. And because frontier model development requires billions of dollars in compute, energy, and specialized talent, the number of organizations capable of operating at the top tier will shrink, not grow. By 2028, I expect no more than four or five entities in the Western world will be capable of training a frontier model, and every one of them will be deeply entangled with the U.S. national security establishment.
Internationally, the world will split into three zones.
The first zone will be the American stack. This includes the United States, the Five Eyes nations, the European Union with significant internal tension, Japan, South Korea, and the Gulf states that have signed AI partnership agreements with Washington. Nations in this zone will use AI systems built on American models, running on American cloud infrastructure, subject to American export controls and security requirements. They will benefit from the most capable models available but will have limited sovereignty over how those models behave, what data they are trained on, and what guardrails they carry. The EU will attempt to maintain regulatory independence through the AI Act, but the dependence on American compute and American models will constrain how far that independence can go in practice.
The second zone will be the Chinese stack. This includes China itself, Russia, Iran, much of Central Asia, and a growing number of nations in Africa, Southeast Asia, and Latin America that adopt Chinese AI tools because they are free, accessible, and come without the compliance requirements attached to American products. DeepSeek and its successors will be embedded in Huawei infrastructure across the developing world. These models will carry their own biases and political constraints. They will not discuss Tiananmen Square or Taiwan’s sovereignty, and their training data and behavioral shaping will reflect the priorities of the Chinese Communist Party. But they will work, they will be free, and for governments and populations that cannot afford the American alternative, they will become the default.
The third zone will be the contested middle. India, Brazil, Indonesia, Turkey, Saudi Arabia, and other large nations with independent ambitions will attempt to maintain relationships with both stacks while building some degree of sovereign AI capacity. Most will fail to achieve true independence because they lack the compute infrastructure, the energy supply, or the talent base. They will end up as clients of one system or the other, or they will oscillate between them depending on which great power offers the better deal in a given year.
By 2028 to 2030, we can expect the following consequences to be clearly visible.
First, AI will be the primary vector for great power competition, surpassing nuclear weapons in strategic importance. This is not because AI will replace nuclear arsenals but because AI will determine who can develop the next generation of weapons, who can conduct the most effective intelligence operations, who can win the economic competition for advanced manufacturing, and who can shape the information environment in which democratic and authoritarian systems compete for legitimacy. The nation with the best AI will not necessarily win a war. It will win everything that happens short of war, which is most of what happens.
Second, the open source AI movement, at least at the frontier level, will be effectively dead in the West. The combination of compute thresholds that trigger regulatory scrutiny, classification regimes that can lock away model weights, export controls that restrict who can access advanced chips, and the sheer cost of training frontier models will make it impossible for independent researchers or small companies to operate at the cutting edge. Open source will survive for smaller models and for specialized applications, but the systems that matter most, the ones capable of genuine scientific discovery, autonomous operation, and strategic reasoning, will be controlled by a handful of corporations operating under government supervision. China will continue to release open source models, but these will serve Beijing’s strategic interests, not the global commons.
Third, the democratic accountability gap will become a crisis. As AI systems take on larger roles in government administration, law enforcement, financial regulation, healthcare, and education, the decisions they make will affect hundreds of millions of people. But the systems making those decisions will have been shaped inside a governance framework designed for national security, not public welfare. The behavioral constraints, the training data curation, the safety evaluations, all of it will have been developed through processes that prioritize threat mitigation over democratic participation. Citizens will interact with AI systems every day whose inner workings they cannot inspect, whose design choices they had no voice in, and whose accountability runs upward to defense committees and steering groups, not downward to the public.
Fourth, the risk of catastrophic misuse will increase precisely because control is concentrated. The argument for centralized governance of AI is that it prevents dangerous actors from accessing powerful tools. But centralization also means that if the centralized authority itself is captured by corrupt, incompetent, or authoritarian actors, the damage is not distributed. It is total. A distributed AI ecosystem has many failure modes, but they are small and recoverable. A centralized AI ecosystem has fewer failure modes, but the ones it has are existential. Every American who has watched the peaceful transfer of power become a contested event in recent years should think carefully about what it means to hand the keys to the most powerful cognitive technology in history to whichever political faction controls the executive branch.
Fifth, and most importantly, the window for establishing democratic guardrails is closing. The architecture described in this article is being built now. The Genesis Mission Consortium is operational. The NDAA provisions are being implemented on statutory deadlines measured in months. The AI companies have already accepted their roles as government partners and are competing for deeper integration. Once institutional arrangements of this magnitude are in place, they generate their own constituencies, their own bureaucratic momentum, their own economic dependencies. Reversing them becomes not impossible but politically and practically excruciating. The time to act is measured in months, not years.
I do not know whether the people building this system are acting in good faith. Some of them almost certainly are. The threat from China is real. The potential for AI to accelerate weapons proliferation is real. The need for some form of governance is genuine. But good intentions do not prevent bad outcomes when the structures being built lack transparency, accountability, and mechanisms for democratic correction. The Manhattan Project was also built by people acting in good faith who believed they were saving civilization. The world they created was safer in some ways and more dangerous in others, and the ordinary citizens whose lives were transformed by their decisions had no say in the matter until it was far too late.
We are at that moment again.


Someone said to me in an email that this is what government should be doing…
My response: Sure, on the surface this is exactly what government should be doing. Protecting the country. Funding science no company would fund alone. Keeping dangerous technology out of adversary hands. If that were the whole story I would be writing about something else.
But here is what is actually happening. The government is building AI systems trained on classified nuclear weapons data that no citizen will ever be allowed to inspect. It is standing up a committee to plan for artificial general intelligence that answers to defense leadership, not to you. It is funding the same four companies that make the AI you use every day, making them dependent on defense money, which means the tools that will soon decide whether you get a mortgage, what medical treatment your doctor recommends, which of your social media posts get suppressed, and whether your name appears on a watch list are being built by companies whose primary customer is the Pentagon.
You get the consumer version. The government gets the real one.
And here is the part that should keep you up at night. The entire reason government exists is to prevent any single entity from becoming so powerful that no one can check it. That is the whole point. But what happens when the government itself becomes so cognitively augmented, so far ahead of every other institution in society, that Congress cannot understand what it is doing, courts cannot evaluate it, journalists cannot investigate it, and citizens cannot even describe it accurately enough to protest?
You do not get a warning when that line is crossed. You just wake up one day and the balance of power between the government and the governed has shifted permanently, and no one voted on it, and there is no mechanism to shift it back.
That is not a hypothetical. The legislation is signed. The systems are running.