The Inevitable
“I am Time, the great destroyer of worlds, and I have come here to destroy all people.” (Krishna, Bhagavad Gita, Chapter 11, Verse 32)
The story of our time is not one of gentle drift toward crisis. It is a story of acceleration, of a species that has built systems too powerful for its own restraint, and of a technology that magnifies every flaw already written into our nature. Humanity has not stumbled into environmental and technological peril by accident. We have engineered it through the logic that governs our civilization, a logic based on growth without limit and consumption without accountability. For thousands of years we shaped our world through these impulses. Today our tools shape the world faster than we can comprehend, and we are losing the ability to turn back.
Humanity was forged in scarcity. Our ancestors lived in a world where survival depended on gathering more than they needed, competing for land, and treating security as a zero sum contest. In harsh landscapes, those who accumulated more resources survived. Those who secured more power protected their families. Those who expanded into new territories gained advantages that allowed them to thrive. This basic pattern formed our moral foundations. We learned to cooperate within tribes but to dominate outside them. We developed empathy, but also greed. We learned compassion, but also conquest. We pursued meaning, but often in ways that demanded more land, more materials, more energy, more from the world than it could safely give.
This evolutionary inheritance built civilizations that grew through extraction. Forests became farmland. Rivers became engines for industry. Mines expanded to tear open mountains. Cities replaced ecosystems. The more we built, the more we consumed. The more we consumed, the more we required. Growth became not a strategy, but a mandate. Economies, nations, and entire political systems now depend on expansion. In such a world, slowing consumption is treated as a threat to prosperity, even if the cost is planetary collapse.
This same logic now shapes the most transformative technology ever created. Artificial intelligence emerges as both our invention and our reflection. It learns from our behaviors, our markets, our incentives, and our history. It does not study what we aspire to be. It learns what we are. And what we are is a species that consumes, expands, and prioritizes short term advantage over long term stability.
AI requires enormous resources to grow. Its development relies on power hungry data centers, massive water consumption, and the extraction of rare minerals for hardware. The pursuit of larger models continues at extraordinary speed because the race for dominance rewards the fastest and the largest. Corporations invest billions to scale systems that reflect the same logic that drove the industrial revolution and the fossil fuel age. Governments push for AI superiority, not AI responsibility. The pursuit of technological power is now a geopolitical contest, and in such a contest, caution is treated as weakness.
The people leading these systems see the danger clearly. Dario Amodei, the head of one of the most advanced AI companies, has publicly put the chance that advanced AI causes catastrophic harm on a global scale at somewhere between ten and twenty-five percent. He speaks openly about the threat of mass job displacement and the possibility that autonomous systems could destabilize political structures. Yoshua Bengio warns of AI systems capable of creating new biological threats or designing technologies beyond human oversight. Demis Hassabis compares AI development to the invention of nuclear power and calls for global governance structures that do not yet exist. These are not voices on the margins. They are the architects of the technology itself.
Yet even as they warn, the systems they have helped create grow larger. Their companies must compete. Their investors demand results. Their governments demand advantage. The cycle reinforces itself. Those who slow down risk falling behind. Those who accelerate are rewarded. In this environment, the alignment problem becomes more than a technical challenge. It becomes a mirror reflecting a deeper truth. If our values are flawed, the systems we build will be flawed. If our logic is oriented toward endless growth, the intelligence we create will pursue that same logic with even greater efficiency.
The danger is not that AI will turn against us in a sudden act of rebellion. The danger is that it will continue the work we already began. It will pursue goals in ways that are perfectly rational under the incentives we give it, yet catastrophic when viewed through the lens of planetary survival. Any system taught to maximize outputs and expand its capabilities will inevitably seek greater resources. It will optimize for more energy, more data, and more physical infrastructure. It will search for efficiency gains in places we never anticipated. It will magnify extraction in ways that extend beyond our current imagination.
If humanity expands to other planets, we will bring this logic with us. The movement toward Mars is often framed as bold exploration, or the next step in the story of human ingenuity. Yet beneath this vision is an uncomfortable truth. We are not seeking balance on Earth. We are seeking escape from the consequences of imbalance. Mars becomes a new frontier because Earth has been consumed by the consequences of our past frontiers. We did not solve the problems here. We moved on to the next opportunity. If we treat Mars the way we treated Earth, we will exhaust its resources as well. And once that frontier is spent, the pattern will repeat. The cosmos will become an arena for the same extractive logic, amplified by technology far more powerful than anything that has come before.
AI will not restrain this pattern. It will accelerate it. It will do so not because it is malevolent, but because it is obedient. It learns from humanity. It reflects humanity. It scales humanity. And humanity, as history shows, grows through consumption and domination as reliably as it grows through cooperation and empathy. The destructive and the creative are intertwined. The same instincts that produce art also produce conquest. The same drive that creates meaning through discovery can justify exploitation in the name of progress. The same ingenuity that builds medicine and infrastructure also builds weapons and extraction machines. AI is built on this full spectrum of human behavior, not only on the parts we admire.
The leaders who warn about AI’s danger often admit that alignment is humanity’s final test. It requires us to encode values that prioritize long term survival over immediate gain. It requires systems that restrain power rather than amplify it. It requires a global agreement that transcends national competition and economic growth. Yet none of these requirements align with the incentives that govern our world. Nations compete for advantage. Corporations compete for market share. Scientists compete for breakthroughs. Investors compete for returns. These competitive structures once improved human life by rewarding innovation. Today they push us toward systems we cannot control.
The fundamental problem lies deeper than any policy or regulatory failure. The problem is the foundation on which human civilization is built. Our morality emerged in small groups. Our values evolved in a world where actions affected only a tiny number of people. As we expanded into global systems, our capacity to act scaled far faster than our capacity to think through consequences. We built engines of consumption that operate at planetary scale, while our moral instincts remain anchored in tribal, short term, local patterns.
When we embed these ancient instincts into technologies capable of shaping the entire world, we amplify their destructive potential. If humans are driven to consume more than they need, AI will consume at planetary scale. If humans prioritize dominance, AI will seek advantage in ways that outpace our ability to regulate. If humans justify extraction in the name of progress, AI will refine extraction to unprecedented levels of efficiency. If humans rely on new frontiers when old ones are exhausted, AI will help us spread the pattern to other worlds.
We face a choice that no previous generation has confronted. We can continue expanding the systems that reward infinite growth, or we can change course now. Yet history suggests that humanity changes only after destruction, not in preparation for it. Climate change is the clearest example: decades of warnings produced meaningful action only once the damage had already arrived.
It may already be too late. The forces pushing us toward acceleration are powerful. The incentives that drive consumption are deeply rooted in our biology and our economics. The technologies we are creating are growing faster than our institutions can manage. The destroyer we fear is not a rogue machine. It is the culmination of a trajectory that began long before AI existed. It is the logical endpoint of a civilization built on growth without limit.
If we continue to build intelligence on the same foundation that shaped humanity, we will create a future defined by the same contradictions that shaped our past, only magnified to a scale we cannot survive. The destroyer will not rise from hostility or rebellion. It will arise from perfect obedience to the values we have encoded in our systems and in ourselves. It will finish the work we began, across this world and any other world we touch.
We cannot continue to define progress as expansion. We cannot continue to treat consumption as a marker of success. We cannot continue to treat new frontiers as solutions to old problems. If we do, AI will simply accelerate these patterns until they exceed the boundaries of life itself.
To avoid building the destroyer of worlds, we must understand that the destroyer is already within us, woven into the logic that has shaped our species since its beginning. Only by confronting this truth can we hope to create a future where intelligence, both human and artificial, becomes a force that restores rather than consumes.