Snake Oil
I love technology. I need to say that upfront, because what follows might sound like I don’t.
I remember the first time I used Google in 1998 and understood, viscerally, that the world had changed. I remember the iPhone keynote. I remember when solar panels crossed the cost threshold that made them cheaper than coal in most of the world, a victory decades in the making, dismissed as impossible by serious people right up until it happened.
Technology delivers miracles. Real ones. The skeptics are often wrong.
But here’s what I’ve learned from watching this industry for twenty-five years. The fact that technology sometimes delivers miracles is precisely what makes it such effective cover for fraud. The real achievements create a halo that protects the fake ones. And right now, some of the biggest names in tech are selling you something that isn’t going to work.
I’m not talking about incremental dishonesty. I’m talking about a specific genre of promise that has emerged from Silicon Valley, a techno-utopianism that uses the language of science to sell what is essentially a religious proposition. Salvation is coming. The singularity. Mars. AGI. Just keep the faith, keep investing, and don’t ask too many questions about the physics.
Let’s talk about Mars.
Elon Musk frames Mars colonization as a backup drive for humanity, an insurance policy against extinction. This framing is smart because it appeals to people who think in terms of expected value. Even if the probability is low, the payoff of saving humanity is infinite, so the expected value calculation favors trying.
But this argument has a flaw. It assumes Mars colonization is possible at all on relevant timescales. And when you look at the actual physics, the picture gets bleak fast.
Mars has no magnetic field. This is not a design choice. It is a consequence of the planet’s core cooling billions of years ago, and we have no technology to restart it. On Earth, our magnetic field deflects solar radiation and cosmic rays. Without it, the solar wind would slowly strip our atmosphere away and the surface would be bathed in sterilizing radiation. Mars shows us the end state of that process.
According to the European Space Agency, astronauts traveling to Mars would receive radiation doses up to 700 times higher than on Earth. A NASA study found that radiation-induced mortality risk could run as high as 5% to 10% for a single Mars mission. Marco Durante, an ESA physicist who studies space radiation, has said flatly that as things stand today, we cannot send humans to Mars because of radiation.
The standard response is that colonists would live underground. Fine. But underground in what? Martian soil is saturated with perchlorate salts at concentrations of 0.5% to 1%, toxic compounds that interfere with thyroid function. NASA’s Chris McKay has warned that anyone planning to live on Mars needs to think carefully about the poisons on the surface, because very small amounts are dangerous and the fine dust gets into everything.
I’m not saying these problems are unsolvable in principle. I’m saying they are not close to being solved, and the timeline Musk sells, millions of people on Mars within decades, is not engineering optimism. It is fantasy.
Here’s a question worth asking. If you had $100 billion to reduce existential risk to humanity, would you spend it on Mars colonization, or would you spend it on pandemic preparedness, nuclear de-escalation, and climate adaptation? The expected value calculation only favors Mars if Mars actually works. And the base rate on promised revolutionary technologies that don’t pan out is much higher than technologists like to admit.
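The arithmetic behind that question is worth making explicit. Here is a minimal sketch; every number in it is an illustrative assumption I am supplying for the example, not an estimate anyone has published:

```python
# Expected value of an intervention: probability of success times payoff.
# All numbers below are illustrative assumptions, not real estimates.

def expected_value(p_success: float, payoff: float) -> float:
    return p_success * payoff

PAYOFF = 1.0  # normalize "humanity survives the catastrophe" to 1 unit

# The pitch: even a tiny chance at a huge payoff dominates the calculation.
# But with a finite budget, the real comparison is against alternatives.
mars = expected_value(p_success=0.001, payoff=PAYOFF)
pandemic_prep = expected_value(p_success=0.30, payoff=PAYOFF)

print(mars, pandemic_prep)

# With these (assumed) numbers, the mundane option wins by a factor of
# hundreds. The argument only flips if p_success for Mars is meaningfully
# nonzero on relevant timescales -- which is exactly the contested premise.
```

The point of the sketch is not the specific numbers. It is that "infinite payoff" rhetoric smuggles in a probability estimate, and that estimate is the whole debate.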
We were promised flying cars. Seriously. Glenn Curtiss built a prototype in 1917. For over a century, serious engineers and well-funded corporations have tried to make them practical. They remain toys for billionaires. In the 1950s, Ford and Studebaker unveiled concept cars meant to be powered by onboard nuclear reactors. The magazines promised we would drive thousands of miles without refueling. The cars never got past the mockup stage, because you cannot shield passengers from a fission reactor without making the vehicle weigh something like 50 tons.
The technologies we remember are the ones that succeeded. The ones that failed disappear from memory, creating a survivorship bias that makes technological progress look more inevitable than it is. For every transistor, there are a hundred nuclear cars.
Now let’s talk about AI.
I use AI tools every day. They are genuinely useful. The question is not whether AI is real. It is whether the specific promises being made, AGI that will solve climate change and cure cancer and generate infinite wealth, are grounded in reality or are another nuclear car.
Here’s what the people building these systems are telling us. Sam Altman, CEO of OpenAI, said at Davos that future AI development will require an energy breakthrough. His exact words were, “There’s no way to get there without a breakthrough.” He specifically mentioned nuclear fusion, a technology that has been twenty years away since the 1950s.
This is the CEO of the leading AI company admitting that his own roadmap depends on technology that does not exist. He’s invested $375 million in a fusion startup called Helion, betting that they will crack a problem that has defeated seventy years of effort. Aneeqa Khan, a fusion researcher at the University of Manchester, has pointed out that fusion is already too late to address the climate crisis.
Meanwhile, the AI systems we are building today consume staggering resources. Data centers used about 415 terawatt-hours of electricity globally in 2024, roughly 1.5% of all electricity generated on Earth, and the International Energy Agency projects this will more than double by 2030. A single Google data center in Council Bluffs, Iowa used 1 billion gallons of water in 2024, enough to supply all of Iowa’s residential needs for five days.
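Those percentages are easy to sanity-check. A quick back-of-the-envelope, where the one number I am supplying is global electricity generation of roughly 30,000 TWh in 2024; the data center figures are the ones quoted above:

```python
# Back-of-the-envelope check on the data center figures.
# GLOBAL_TWH_2024 is my assumption (approximate world generation);
# the data center numbers are the IEA figures quoted in the text.

DATA_CENTERS_TWH_2024 = 415
GLOBAL_TWH_2024 = 30_000  # assumption: rough global generation, TWh

share = DATA_CENTERS_TWH_2024 / GLOBAL_TWH_2024
print(f"{share:.1%}")  # about 1.4%, consistent with "roughly 1.5%"

# "More than double by 2030" implies something north of 830 TWh --
# on the order of the entire annual electricity use of Japan.
projected_floor_2030 = 2 * DATA_CENTERS_TWH_2024
print(projected_floor_2030)
```

The check is crude, but that is the point: the headline claim survives even sloppy arithmetic.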
We are burning the present to build a future that requires technology we don’t have.
If you work in tech, you might be thinking that none of this affects you personally. You’re building the future. You have equity. You understand the technology better than the critics.
Consider the layoff data. Tech companies cut approximately 200,000 workers in 2023 and another 95,000 in 2024. The cuts are continuing into 2025. Meta, Google, Amazon, Microsoft, all slashing headcount while their executives talk about efficiency and focusing on AI.
Here’s the detail that should concern you. In a recent round of Microsoft layoffs in Washington state, software engineering was the single largest job category cut, more than 40% of the positions eliminated. Not marketing. Not HR. Engineers.
The pattern is straightforward. Companies lay off customer support staff and hire engineers to build AI chatbots. Then they lay off junior engineers because the AI can write boilerplate code. The technology you are building is designed to make you unnecessary. That is not a conspiracy theory. It is the explicit business model.
I understand why this is hard to accept. You’re smart. You’ve been rewarded for being smart. The system has told you that your intelligence makes you valuable, maybe even irreplaceable. But that’s exactly what the system told factory workers before automation, and taxi drivers before Uber, and journalists before the internet hollowed out their industry.
The wealth generated by technological disruption does not automatically flow to the workers who built it. It flows to the people who own the capital. This isn’t ideology. It’s the historical pattern. The International Monetary Fund projects that AI will likely worsen overall inequality. A recent UN report found that 40% of jobs could be affected, and the benefits of automation tend to favor capital over labor.
Every technological revolution in modern history has followed this trajectory. The gains were eventually distributed more broadly, but only after decades of political struggle. The market did not share the proceeds voluntarily. People organized and fought for them.
If you’re a libertarian, you might object that this framing is biased against markets. But think about what’s actually happening. A small number of companies, controlled by a small number of people, are building systems that will reshape the economy and concentrate unprecedented power. These are not decentralized, permissionless technologies. They are centralized platforms controlled by executives who are accountable to no one except their shareholders.
If you’re worried about government overreach, you should be equally worried about private power that is less transparent, less accountable, and arguably more capable of shaping how people think and act. The platforms already control the information environment. AI gives them tools to manipulate it at scale.
I’m not arguing against technology. I’m arguing against faith.
The problems facing humanity right now, climate change, pandemic risk, institutional decay, are not waiting for a messiah. We already know how to address climate change. The barriers are political, not technological. We knew how to prepare for pandemics before COVID. We chose not to. These are problems of coordination, distribution, and political will. A god-in-a-box is not coming to solve them for us.
The tech billionaires are selling you a story where you don’t have to do the hard work. You just have to wait for the breakthrough. Trust the visionaries. Keep the faith.
But the breakthrough might not come. Fusion has been twenty years away for seventy years. Flying cars have been coming since 1917. Mars is still a frozen, irradiated, toxic rock with no magnetic field, and no amount of CGI renders changes the physics.
The bottle might be empty.
And if it is, the question becomes what we’re going to do about the actual problems we face, using the tools we actually have, in the time we actually have left. That’s harder than waiting for a miracle. But it’s the only thing that has ever worked.