Guided Missiles, Misguided Minds
“Our scientific power has outrun our spiritual power. We have guided missiles and misguided men.” - Martin Luther King Jr.
Something feels deeply wrong, and many people can sense it. We are racing to build artificial intelligence that consumes enormous energy and water and that can manipulate opinion, replace work, and militarize decision making. At the same time, ordinary people feel more precarious, not less. That tension is not an accident. It comes from how our civilization is wired to think about winning, safety, and meaning.
At the surface level, the story sounds rational. More intelligent systems will grow the economy, cure disease, and keep countries secure. People are amazed at what a prompt can return. Leaders talk about competitiveness and national security. Companies talk about productivity and innovation. But beneath that language lies an older, more primitive script. What matters most is not that everyone is safe enough. What matters is who is ahead of whom. States want to be ahead of rival states. Firms want to stay ahead of rivals. Executives want to be ahead of their peers. Even if the game is slowly burning the floor we are all standing on, no one wants to be the first to stop playing.
When you look closer, the drive is not just to survive. It is to dominate. Artificial intelligence looks like the perfect tool for this. Whoever has the best systems can move money faster, shape stories faster, target weapons faster, and cut labor costs faster. Saying no to that tool feels like choosing weakness in a world that punishes weakness. So even people who quietly fear where this will end up feel they cannot afford to restrain themselves.
There is another layer that concerns control. The world feels chaotic, with climate events, political breakdowns, and new diseases. People who run large systems are terrified of losing control. Artificial intelligence is sold as a way to make chaos legible. Predict the market. Predict crime. Predict what voters will think. Predict what patients will need. The promise is not only new wealth. It is the soothing idea that a clever enough system can manage the future. At a psychological level, that promise is intoxicating. The more frightening reality becomes, the more attractive the fantasy of total control.
This hunger for control through technology is the same force driving the turn toward authoritarianism that the world is experiencing right now. It is not a coincidence that AI acceleration and authoritarian governance are rising in tandem. Both appeal to the same deep anxiety about chaos and unpredictability. Both promise to replace messy human deliberation with efficient systems that can be trusted to make decisions faster and with less contestation. Authoritarianism offers control through surveillance, centralized decision making, and the elimination of resistance. Artificial intelligence offers control through prediction, optimization, and automation. They are the same impulse wearing different clothes.

A leader who fears democratic disorder naturally wants AI to monitor dissent, manage information flows, and anticipate threats before they form. A corporation that fears labor unrest or market volatility naturally wants AI to eliminate human unpredictability from supply chains and decision making. The technologies and the political systems are not competing. They are reinforcing each other. As economic precarity rises and people feel less secure, they become more willing to trade freedom for the promise of order. As that willingness grows, leaders and firms gain a stronger justification for investing in surveillance systems and automated controls. Artificial intelligence is not neutral infrastructure. It is the nervous system that makes totalizing control technically possible. Authoritarianism is not a separate phenomenon. It is the political expression of the same refusal to accept uncertainty and human agency that produces the push for AI. This is why I think and write so much about both subjects.
Under all of this sits what you could call the religion of more. Modern societies treat endless growth as natural and necessary. If an innovation raises output, it is good by definition. Very few decision makers are structurally rewarded for saying “enough.” Artificial intelligence is growth concentrated. It offers more output per unit of human life, greater speed, and higher throughput. To question the project seriously would mean questioning the growth story that justifies most existing institutions. For many people in power, that is unthinkable. It would mean admitting that the operating system of the last century is failing.
At an even more uncomfortable depth, there is the human refusal to accept vulnerability. Bodies age and fail. Minds forget. Cultures crumble. The dream of superintelligence is also a dream of escape. Maybe a machine can solve aging. Perhaps it can upload memories. Maybe it can watch the world for threats while we sleep. These are not only technical fantasies. They are spiritual ones in a disenchanted age. To step back from the edge of artificial intelligence would feel like walking away from the last big chance to outrun our own fragility.
All of this could still be corrected if responsibility were clear and direct. It is not. The harms are spread thin and over time. Workers lose security one industry at a time. Local communities lose water slowly. Political systems degrade gradually as trust in information becomes harder to place (see ‘You Won’t Feel a Thing’). Benefits and control sit in a much smaller circle: investors, executives, senior officials. Each person inside that circle can tell a story in which they are not really in charge. An engineer says someone else will build it anyway. A minister says another country will push ahead anyway. A chief executive says investors will punish restraint anyway. Responsibility dissolves as it moves through the system.
So when you ask why humanity is pushing so hard on something that could poison the environment, hollow out demand, and supercharge conflict, the deepest answer is painful. We built institutions that maximize advantage and expansion without building equally strong mechanisms for long term care and restraint. We trained generations of leaders to pursue growth and status at all costs, and to treat the downstream damage as somebody else’s problem in some future year. Artificial intelligence is not a deviation from that pattern. It is its purest expression.
If you are working on an artificial intelligence project today, ask yourself: “Am I helping a system get better at treating people and the planet as expendable, or am I actually helping it learn to stop doing that?”
The real WTF question for the rest of us is not only about what this technology does, but what it reveals. A culture that cannot choose sufficiency over escalation will eventually create tools that make its contradictions impossible to ignore. Artificial intelligence is one of those tools. It exposes the gap between what we say we value and what our systems actually reward. For example, a hospital can advertise an AI system that will make care more efficient and fair, yet the first thing it does is deny treatment to the poorest people more quickly and more systematically. Until that gap is faced honestly, no technical fix will save us from the logic that produced this moment.