No Undo Button
When people see the Virgin Mary in a piece of toast or share grainy Bigfoot clips as “proof,” it is fair to wonder whether we can handle basic evidence at all. And if we struggle with that, how are we going to manage entities that can outthink, outmaneuver, and outscale us? If our history is a case study in unrestrained greed and power grabs, why should AI be any different? These are not fringe questions. They go straight to the heart of whether “AI safety” is even a coherent project for a species like ours.
The idea that we will “control AI” carries a hidden assumption. It presumes that the institutions and people in charge are rational, informed, and aligned with the long term interests of humanity.
They are not. And the people at the top know it.
Human cognition runs on shortcuts that made sense on the savanna and fail catastrophically in the modern world. We see patterns that aren’t there. Faces in clouds. Saints in stains. Agency in randomness. We are wired to detect intention everywhere, which is why people attribute purpose to storms and consciousness to chatbots. This same machinery will lead millions to credit AI systems with wisdom and morality they do not possess, making manipulation not just possible but trivially easy.
We prefer stories that feel true over data that is true. Emotional resonance beats statistical evidence every time. This is why conspiracy theories outcompete policy papers and why a compelling narrative about miracle cures spreads faster than clinical trials can debunk it. AI systems are already generating persuasive content tailored to individual psychological profiles. They will get better. We will not.
We treat technology as magic. If it works and someone in authority says it is safe, we believe them. The same deference that once went to priests and oracles now goes to algorithms and executives in black turtlenecks. When systems become opaque even to their creators, “trust the model” becomes the path of least resistance.
These are not minor bugs in human cognition. They are the operating system. And the people building the most powerful AI systems in history understand this better than almost anyone.
That is the part no one wants to say out loud.
Marc Andreessen did not become a billionaire by misunderstanding human psychology. David Sacks did not build his career in venture capital by ignoring how people actually make decisions. These are men who have spent decades studying what makes products viral, what makes people click, what makes investors overweight hype and underweight risk. They know exactly how irrational the public is. They have profited from it their entire careers.
So when Andreessen publishes a manifesto calling AI regulation “a form of murder” and listing sustainability, social responsibility, and risk management as “enemies” of progress, he is not making an honest philosophical argument. He is making a bet. He is betting that the public will not understand what is happening until it is too late, and that by the time they do, he will be too rich and too powerful for it to matter.
When Sacks takes a position as AI Czar while maintaining more than 400 investments in AI companies, receiving ethics waivers that government experts call “sham waivers” and “like a presidential pardon in advance” for what would otherwise be criminal conflicts of interest, he is not acting in the public interest. He is locking in gains. He is using the machinery of government to remove the guardrails that might slow down the systems that make him money.
These are not true believers. They are cynics who understand probability. They know that racing to build systems you cannot control creates catastrophic downside risk. They also know that the upside flows to them and the downside flows to everyone else. That is not ideology. That is calculation.
The public, meanwhile, has no idea what is coming.
Most people think AI is a better search engine. Maybe it writes emails or generates pictures. They do not understand that frontier AI systems are approaching the ability to outperform humans at most cognitive tasks. They do not understand that these systems can already generate infinite persuasive content optimized for engagement over truth. They do not understand that the same technology that autocompletes their text messages is being scaled toward systems that may be impossible to turn off.
They do not understand because no one with power has any incentive to explain it to them.
The AI companies do not want the public to be frightened. Frightened publics demand regulation, and regulation slows growth. The investors do not want scrutiny. Scrutiny reveals conflicts of interest and punctures valuations. The politicians do not want to look weak on competition with China, so they repeat the phrase “we must win the AI race” without ever asking what it means to win a race toward something you cannot control.
And so the public hears “artificial intelligence” and thinks of movie robots and talking assistants, not of systems that will reshape every institution in society faster than any human process can adapt.
This is not an accident. It is a strategy. Keep the public calm. Keep the money flowing. Lock in the gains before anyone understands what is being built.
Here is what makes this different from every previous technology.
There is no going back.
When we built nuclear weapons, we could at least theoretically dismantle them. The knowledge remained, but the stockpiles of physical bombs could be drawn down. When we built social media platforms that destabilized democracies, we could at least imagine regulation or breakup. The systems were bounded. They ran on servers we controlled.
AI is not like that. Once the models are trained and the weights are distributed, they exist. They can be copied. They can be run on commodity hardware. They can be modified and recombined. There is no central repository to shut down, no single point of control. The open source versions already spreading across the internet will not disappear if OpenAI or Anthropic goes out of business. They will proliferate.
And the capability curve is not slowing down. Each generation of models is more powerful than the last. The gap between what exists in research labs and what the public has seen is measured in years and orders of magnitude. By the time ordinary people understand what these systems can do, the next generation will already be in development.
This is the structural reality that makes AI different from every technology that came before. There is no undo button. There is no Manhattan Project in reverse. The genie does not go back in the bottle. The question is not whether we can stop this. The question is whether we can shape it at all before the window closes.
And right now, the people shaping it are the ones who profit from speed and lose from caution.
In December 2025, the Trump administration issued an executive order directing the Justice Department to challenge more than 100 state AI laws. The order frames any law requiring AI systems to “alter their truthful outputs” as unconstitutional interference with innovation. The language sounds technical, but its meaning is clear. If a state passes a law requiring AI companies to check for bias, to correct misinformation, or to disclose how their models work, the federal government will sue to block it.
Andreessen celebrated. “A 50 state patchwork is a startup killer,” he posted. “Federal AI legislation is essential.”
But there is no federal AI legislation. There is only the removal of state protections, leaving a vacuum that industry will fill on its own terms.
Even Steve Bannon, not known as a progressive critic, has pushed back. “Right now, you have more regulations, ten times more regulations, to open a nail salon on Capitol Hill than you have into one of the most promising yet one of the most dangerous technologies ever invented,” he said after lobbying against Sacks at Mar-a-Lago. “Where’s the risk mitigation? I haven’t seen it.”
He has not seen it because there is none. Risk mitigation costs money and slows deployment. The people in charge have decided that those costs are unacceptable. They have decided this while holding billions of dollars in AI investments. They have decided this while receiving government waivers that let them shape policy without divesting. They have decided this while telling the public that everything is fine.
When Oppenheimer watched the first atomic test, he recalled a line from the Bhagavad Gita. “Now I am become Death, the destroyer of worlds.” He and his colleagues knew what they had built. They tried, and largely failed, to put the genie back in the bottle. But at least they understood what they were holding. At least they had the honesty to be afraid.
The people building and governing AI today have convinced themselves, and are working hard to convince you, that there is no genie. That this is just another technology. That markets will sort it out. That caution is cowardice. That anyone who asks hard questions is a Luddite who does not understand progress.
They are telling you this while cashing checks that depend on you believing it.
Maybe they are right. So let’s assign some probabilities.
The optimists being fully right, with benevolent technology, adapting institutions, and overblown risks? Maybe 10 to 15 percent. This requires too many things to go well simultaneously. It requires alignment to be easier than it looks, institutions to be more adaptive than they have ever been, competitive dynamics to somehow not create races to the bottom, and the conflicts of interest currently driving policy to somehow not matter.
Muddling through with significant but recoverable harms? Maybe 35 to 40 percent. Some disasters, some course corrections, democracy wobbles but holds, AI remains powerful but tool-like.
Serious structural damage, including turbocharged authoritarianism, pervasive manipulation, eroded shared reality, and mass displacement without adaptation? Maybe 30 to 35 percent.
Loss of meaningful human control? Maybe 15 to 20 percent.
Add those last two together and you get a coin flip chance of outcomes ranging from very bad to catastrophic. And there is a certain irony in asking an AI to assess these odds, which I will leave you to sit with.
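The arithmetic behind that coin flip is easy to check. A minimal sketch (the scenario labels and percentage ranges are taken directly from the estimates above):

```python
# Scenario probability ranges from the estimates above, in percent (low, high).
scenarios = {
    "optimists fully right": (10, 15),
    "muddling through": (35, 40),
    "serious structural damage": (30, 35),
    "loss of meaningful control": (15, 20),
}

# Sum the two worst scenarios: structural damage plus loss of control.
bad_low = scenarios["serious structural damage"][0] + scenarios["loss of meaningful control"][0]
bad_high = scenarios["serious structural damage"][1] + scenarios["loss of meaningful control"][1]

print(f"Combined bad-outcome range: {bad_low} to {bad_high} percent")  # 45 to 55: a coin flip
```

The four ranges do not have to sum to exactly 100 percent; they are rough intervals, and their low and high totals (90 and 110) simply bracket it.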
But here is what matters. Even if you are more optimistic than this, even if you give the sunny scenario a 50 percent chance, we have made an irreversible bet with asymmetric consequences. You do not need certainty of disaster to want guardrails. You need only a nonzero probability of catastrophe combined with people actively removing the guardrails for profit.
We have the nonzero probability. We have the people removing the guardrails. We have the profit motive. We have the irreversibility.
That is the situation we are in.
So ask yourself the questions that the people running this system cannot honestly answer.
Do you trust that investors dismantling AI regulations while holding billions in AI assets are acting in your interest rather than their own?
Do you believe that a technology capable of generating infinite persuasive content and eventually outperforming humans at most cognitive tasks will be deployed wisely by companies whose legal obligation is to maximize shareholder value?
Do you think the same species that cannot agree on whether vaccines work or elections are legitimate will suddenly develop the collective wisdom to govern something this powerful?
Do you believe that “China will win if we slow down” is a reason to accelerate, or do you recognize it as the same logic that built fifty thousand nuclear warheads?
Do you believe these executives and investors have thought carefully about the risks and concluded they are manageable? Or do you believe they have thought carefully about the risks and concluded that the rewards flow to them while the consequences flow to you?
These are not rhetorical questions. They are the only questions that matter. And the people making the decisions cannot answer them honestly, because honest answers would reveal what this actually is.
Not a bet on progress. A bet on you not figuring it out in time.

