The Sorting
I spent the better part of two years warning anyone who would listen that artificial intelligence without governance was a civilizational risk. I wrote about regulatory capture, democratic erosion, the structural incentives that would drive wealth concentration to levels not seen since the Gilded Age. I made the case that the window for meaningful intervention was closing. I made it clearly. I made it with evidence. I made it with urgency.
The window closed.
I am not saying this with resignation. I am saying it with the cold clarity of someone who has watched the most powerful government on earth not merely fail to regulate the most consequential technology in human history, but actively mobilize its legal and economic infrastructure to prevent anyone else from doing so. In January 2025, the Trump administration revoked Biden’s AI safety executive order on day one. By December, a second executive order established a DOJ AI Litigation Task Force with explicit instructions to sue states that tried to fill the federal vacuum. The administration attempted a ten-year moratorium on all new state AI laws. When that failed legislatively, they pivoted to conditioning $42 billion in broadband infrastructure funding on states repealing their own AI regulations. As Alondra Nelson wrote in Science, what the administration calls deregulation is actually hyper-regulation by other means, concentrating governmental power at the federal level while deploying it through mechanisms not typically classified as regulation.
This is not the absence of governance. It is governance captured by the governed.
So I’ve made a decision. I am done spending my energy telling people to push for guardrails that their government has been purchased to prevent. The warning was real. The warning was accurate. But warnings only matter if someone with power is listening, and the people with power are the ones building the machine.
And if you still need convincing that the threat is real, stop listening to me. Listen to the people who built this technology.

Geoffrey Hinton, the man whose 1986 research on neural networks made all of this possible, won the Nobel Prize in Physics in 2024 and then used his banquet speech to warn the world that AI systems created by companies motivated by short-term profits will not prioritize human safety. He told the Financial Times that rich people will use AI to replace workers, creating massive unemployment and a huge rise in profits, and that this is not AI’s fault but the fault of capitalism itself. By December 2025 he told CNN he was more worried than he had been two years earlier because AI had progressed even faster than he expected.

Dario Amodei, the CEO of Anthropic, told Anderson Cooper on 60 Minutes that he expects AI to disrupt 50% of entry-level white-collar jobs in one to five years. He wrote a 20,000-word essay in January 2026 warning that we are considerably closer to real danger than we were in 2023 and compared what is coming to a country of 50 million people materializing on earth, all of them more capable than any Nobel Prize winner. He then added, and I find this remarkable for a CEO to say publicly, that after authoritarian governments, the next tier of risk is actually AI companies themselves.

Yoshua Bengio, the most cited computer scientist in the world and a Turing Award winner, warned in December 2025 that frontier AI systems had crossed new thresholds concerning biological risks and were displaying enhanced capacities for cyberattacks and deceptive self-preserving behaviors.

Even Sam Altman, who has never met a safety concern he wouldn’t publicly acknowledge and then privately steamroll in pursuit of market share, has written that superintelligence efforts will eventually require governance equivalent to an international atomic energy agency. He said that.
Then he spent $20 million on political action committees to make sure it never happens.
But my favorite voice on this is Mo Gawdat, the former Chief Business Officer at Google X, Google’s secretive moonshot lab. Gawdat ran business operations at one of the most advanced research facilities on the planet, and he is the most direct of all of them. He calls the idea that AI will create enough new jobs to replace the ones it destroys “100% crap.” He points to his own AI startup, which three people built using AI tools in a project that would have required 350 developers a few years ago. He predicts that the next 15 years will be hell before we get to heaven, that capitalism in its current form cannot survive the AI transition, and that the economic consequences will arrive faster than anyone in power is currently planning for. Unlike some of the others, Gawdat does not hedge. He does not speak in probabilities. He describes a world that is arriving whether we are ready or not. I recommend watching his Diary of a CEO interview linked above. It is the single most honest and unflinching conversation about what is coming that I have found anywhere.
These are not fringe thinkers. These are the architects. They are telling you what they built, what it does, and what it will do next. If you do not believe me, believe them.
What I am going to do instead is talk to the people who are about to be sorted.
Because that is what is happening right now. Not in the future. Not in some speculative timeline. Right now, in 2026, a sorting is underway. It is economic. It is cognitive. It is structural. And most people have no idea which side of it they are going to land on.
Let me show you the numbers so you understand the scale of what is already in motion.
In 2025, U.S. employers announced 1.2 million job cuts, the highest total since the pandemic year. According to the outplacement firm Challenger, Gray & Christmas, 55,000 of those were explicitly attributed to AI by the companies themselves, twelve times the number from just two years prior. But both of those figures are unreliable, and for opposite reasons.
The 55,000 number is almost certainly too low. Modeling estimates that account for restructuring announcements and automation propensity scoring put the real displacement figure closer to 200,000 to 300,000. Companies have strong incentives to avoid disclosing AI as the driver. When New York began allowing employers to cite “technological innovation or automation” on legally required layoff notices in March 2025, Wired found that none of the 160 companies that filed notices checked the box. Not Amazon, which had eliminated 14,000 corporate roles while its CEO told investors that AI agents would allow leaner structures. Not Goldman Sachs, which was simultaneously describing AI efficiency gains on earnings calls. They preferred to let the cuts look like ordinary restructuring.
At the same time, some of the companies loudly attributing their layoffs to AI are probably lying in the other direction. Jack Dorsey’s Block cut nearly half its workforce and framed it as an AI transformation story, but Salesforce CEO Marc Benioff publicly called that AI washing, suggesting Block was using the narrative to disguise financial and management problems. Amazon’s Andy Jassy initially credited AI for corporate headcount reductions and then walked it back, saying the cuts were “not really AI driven, not right now at least.” Some firms are dressing up cost cutting as innovation to impress investors. Others are quietly automating functions while publicly denying it. The result is a job market where the official data tells you almost nothing about what is actually happening.
What the data does tell you clearly is this. The layoffs are happening during record corporate profitability, not after recessions. That pattern is new. Companies are not cutting because they are struggling. They are cutting because they can, and because Wall Street rewards them for it.
Entry-level job postings have dropped 35% since January 2023. Goldman Sachs found that employment for workers aged 22 to 25 in AI-exposed roles fell 6% in less than three years, and young software developers saw nearly a 20% decline. Anthropic’s own CEO said publicly that AI could eliminate roughly 50% of entry-level white-collar positions within five years. Cornell University research found that companies adopting AI reduced junior hiring by 13%. The pipeline that has always produced experienced professionals is being quietly dismantled.
Meanwhile, the top 1% of American households now hold 31.7% of all wealth, roughly equal to the bottom 90% combined, the widest gap since the Federal Reserve started collecting the data in 1989. Billionaire wealth grew three times faster in 2025 than the annual average over the previous five years. Elon Musk’s personal net worth increased by $187 billion in a single year. Axios described the emerging structure as three Americas. The Have-Nots, stalling. The Haves, coasting. And the Have-Lots, rocketing. Among the 50 richest Americans, the median increase in net worth last year was nearly $10 billion, in a year where the S&P 500 returned 16% and Treasury bills returned less than 4%. This is not trickle down economics failing. This is a system working precisely as designed.
And while the money concentrates upward, the cognitive capacity of the population is moving in the opposite direction.
A study released just this week from researchers at UCLA, MIT, and Carnegie Mellon provided the first causal evidence that relying on AI for cognitive tasks rapidly impairs problem-solving ability and the willingness to persist through difficulty. Participants who used AI assistants performed better initially, but when the assistant was removed, their independent problem-solving collapsed. The researchers called it a boiling frog effect. Researchers in the emerging field of AI-induced cognitive atrophy, or AICICA, have drawn direct parallels to the neuroscience principle that neural circuits degrade when not actively engaged. Delegating mental effort to AI creates what one team called cumulative cognitive debt, where the prefrontal cortex is progressively underutilized. A study from IE University found a non-linear relationship that is worth understanding. Moderate AI use did not significantly affect critical thinking. But excessive reliance led to sharply diminishing cognitive returns. The threshold between tool and crutch is not obvious, and most people have already crossed it without knowing.
The most alarming finding comes from research on age differences. Adults who use AI are mostly delegating tasks they already know how to do. The cost is efficiency of thought, a kind of intellectual softening. But younger people, those still building cognitive architecture, are not delegating. They are substituting. The AI’s reasoning structure becomes their reasoning structure. As one researcher put it, for a child who never formed independent reasoning, the word “generic” is not a style problem. It is an identity problem. We are not just watching adults get lazier. We are watching an entire generation’s capacity for independent thought fail to develop in the first place.
This is the sorting. Economic and cognitive, running in parallel, reinforcing each other.
The people who will thrive through the next decade are not the ones who are smartest in any traditional sense. They are the ones who understand what is happening and are positioning themselves accordingly. The people who will be destroyed are not stupid. They are simply not paying attention, or they are paying attention in the wrong way, or they have already surrendered their judgment to the tools that are about to replace them.
So, how will you be sorted? Into the ‘I’ll thrive’ bucket or into the ‘I’ll be destroyed’ bucket?
And this brings me to what I want to focus on from here forward. Not policy advocacy. Not institutional reform. Not the macro case for why this is dangerous. Everyone who was going to hear that has heard it. What I want to do now is help individual people figure out whether they are about to be sorted into the winners or the losers of this transition. And then, for those willing to do the work, show them what it takes to end up on the right side.
I have been thinking about what questions a person would need to honestly answer to know where they stand. Not a personality quiz. Not a skills assessment. Something more like a diagnostic, where each question reveals a fault line that could determine your trajectory. These are the questions I will be writing about in the posts ahead, because each one is its own deep subject. But I want to lay them out now so you can watch them if you choose.
The first question is about cognitive sovereignty. When you encounter a complex problem at work or in life, do you still think through it yourself before consulting an AI, or has the AI become your first move? This is not about efficiency. This is about whether you are still building and maintaining the neural architecture required to evaluate what an AI gives you. If you have lost the ability to sit with ambiguity for more than sixty seconds before reaching for a prompt, your frustration threshold has already eroded. That erosion is the early symptom of everything that follows.
The second question is about tool fluency. Are you working with multiple AI models, or are you using one model the same way you used Google in 2015? The difference between someone who uses ChatGPT to write emails and someone who runs Claude, Gemini, and GPT against the same problem with structured prompts, evaluates contradictions between their outputs, and uses agentic tools like Claude Code to build working solutions is the difference between a person using a calculator and a person building the spreadsheet that replaces the department. The gap between these two users is already enormous and it is widening every month.
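To make the cross-examination workflow concrete, here is a minimal sketch. The model callables are stand-in stubs invented for illustration; a real version would replace them with each vendor's actual API client. The point is the structure: same prompt to every model, collect the answers, and treat disagreement as the signal to verify for yourself.

```python
from typing import Callable

def cross_examine(prompt: str, models: dict[str, Callable[[str], str]]) -> dict:
    """Send one prompt to several models and flag disagreement.

    Returns each model's answer plus a 'contested' flag. When the
    models disagree, no single output deserves your trust yet; the
    contradiction tells you where to dig in manually.
    """
    answers = {name: ask(prompt) for name, ask in models.items()}
    # Normalize lightly so trivial whitespace/case differences don't count.
    distinct = {a.strip().lower() for a in answers.values()}
    return {"answers": answers, "contested": len(distinct) > 1}

# Hypothetical stubs standing in for real Claude/Gemini/GPT clients.
stubs = {
    "claude": lambda p: "42",
    "gemini": lambda p: "42",
    "gpt":    lambda p: "41",
}

result = cross_examine("What is 6 * 7?", stubs)
print(result["contested"])  # disagreement found: time to check the sources yourself
```

In practice the normalization step is where the real work lives; comparing long free-form answers requires judgment, not a set comparison. But even this crude version enforces the habit that matters: never accept one model's answer without seeing whether another model contradicts it.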
The third question is about intellectual honesty with outputs. When an AI model gives you an answer, do you actually understand it before you use it? Can you spot a hallucination? Can you identify when a model is confidently wrong, when it is citing something that does not exist, when it is synthesizing plausible-sounding nonsense from pattern matching? Most people cannot, and this is the single fastest way to destroy your credibility and your career. Using an AI answer you do not understand is no different from presenting someone else’s work that you did not read. Except now the “someone else” is a statistical engine with no accountability and no understanding of your context.
The fourth question is about the direction of your time savings. AI is going to save you time. That is essentially guaranteed if you are using it at all. The question is what you are doing with the time it frees up. If you are using AI to work less, you are being automated. If you are using AI to go deeper on the problems that matter, to spend the saved hours learning, investigating, building expertise that the model cannot replicate, then you are compounding your advantage. This distinction will define careers within the next three years. The person who uses AI to finish by 3pm and watch television is in a fundamentally different position than the person who uses AI to finish the routine work by 3pm and then spends two hours developing domain expertise the model cannot touch.
The fifth question is about financial positioning. Are your investments, your savings structure, your debt exposure, your income sources arranged for a world where AI concentrates wealth upward and displaces the middle class? Or are they arranged for a world that no longer exists? If your retirement is entirely in index funds weighted toward companies that will be disrupted rather than companies doing the disrupting, you need to understand that. If your income depends on skills that are being commoditized by a $20 monthly subscription, you need to understand that too. This is not investment advice. This is a question about whether you are being honest with yourself about the economic structure that is taking shape.
The sixth question is about timing and inflection points. Do you understand where we are in the rollout of AI into the broader economy, and can you identify the moments when things will start to break? Most people think of AI disruption as a gradual slope. It is not. It is a series of plateaus followed by sharp drops. The drops happen when a capability threshold is crossed and an entire category of work becomes automatable overnight. Agentic AI, the ability for models to chain tasks, use tools, browse the web, write and execute code autonomously, is the next threshold. When that capability becomes reliable and cheap, it will not eliminate jobs one at a time. It will eliminate workflows. And the second order effects of those eliminations, the contraction in consumer spending, the collapse of commercial real estate dependent on office workers, the political instability that follows sustained middle class decline, are things most people are not even beginning to think about.
The seventh question is about your relationship to institutions. Do you still believe that your employer, your industry, your government, or your educational credentials will protect you? Because the institutions that were designed to provide stability in the 20th century are not designed for this. Your employer will replace you the moment the math favors it, and they will call it a restructuring. Your industry will be reshaped by someone with no industry experience and a better prompt. Your degree will increasingly certify knowledge that a machine can produce for free. This is not cynicism. This is pattern recognition. And the people who recognize the pattern early will have time to adapt. The people who wait for the institutions to save them will not.
The eighth question is the hardest one, and it is really about identity. Who are you without the work you currently do? If AI eliminates or fundamentally transforms your role, do you have a sense of self that exists outside your job title? The people who navigate this transition best will be the ones who have cultivated something that cannot be automated. Deep relationships. Physical skills. Creative capacity that is genuinely their own. A sense of purpose that does not depend on a paycheck from an organization that sees them as a line item. This is not a soft question. It is the most practical question on this list, because the psychological collapse that follows job loss is what prevents people from adapting, and the people who have nothing outside their work are the most vulnerable to it.
I will be writing about each of these in depth in the weeks ahead. Not as theory. Not as macro analysis. As practical guidance for real people trying to navigate a transition that their leaders have decided to accelerate rather than manage.
The sorting is underway. The only question that matters now is whether you are paying attention to it, and whether you are willing to do the work to end up on the right side.
P.S. If you want a head start while I write the deeper posts on each of these questions, here are some of the best free courses available right now. No excuses. The knowledge is sitting there. The only cost is your time and your willingness to be uncomfortable while you learn. The cost of not participating may be your entire future…
MIT
- Introduction to Machine Learning

Harvard
- CS50’s Introduction to Artificial Intelligence with Python
- Introduction to Programming with Python

Stanford
- Machine Learning Specialization
- Intro to Artificial Intelligence
- Introduction to Python Programming