Is Your Job in AI Morally Defensible?
I have spent all morning asking a question that few people in the AI industry want to hear, and even fewer want to answer honestly. Is a job in AI morally defensible? Not AI in the abstract. Not some future version of the technology that might do wonderful things for humanity. Your specific job. Right now. The specific thing you get paid to build, the specific product your labor helps bring into existence, and the specific consequences that product will have for real people living real lives in a world where no government is coming to help them.
Before I explain where I landed, I need to define what I mean by morally defensible, because the phrase carries weight that shifts depending on who is using it. In this context, I mean something specific. A job in AI is morally defensible if the technology you are building is designed to work alongside human beings, enhancing their capabilities, extending their reach, helping them do things they could not do alone, while preserving their economic participation in society. It is morally indefensible if the technology is designed to work instead of human beings, to eliminate their labor, to render them economically unnecessary, to convert their livelihoods into line items on a quarterly earnings report. The distinction is not subtle. It is the difference between building a tool that helps a doctor spot a tumor she might have missed and building a system that replaces the customer service representative who answers the phone when someone's insurance claim gets denied. One makes a human being more powerful. The other makes a human being unnecessary.
This is not a hypothetical distinction. It maps directly onto what is happening right now across the AI industry, and the data tells a story that should trouble anyone paying attention.
What made me think about this in the first place was a comment someone posted in a WhatsApp group I belong to. They mentioned something called the Cantillon Effect, an economic concept named after Richard Cantillon, an eighteenth century economist who noticed that when new money enters an economy, it does not arrive equally. The people closest to the source of new money benefit first, at old prices, before inflation ripples outward and erodes the purchasing power of everyone further from the spigot. By the time new wealth reaches ordinary people, prices have already adjusted upward, and the real benefit has been captured by those who got there first. The commenter argued that this applies directly to AI. He said, “We all may benefit from AI, but those that have it first will benefit the most.” We are watching the largest Cantillon Effect in history unfold in real time as trillions in capital flow first to Silicon Valley insiders, compute providers, and early employees at funded startups, while the people whose jobs are being eliminated receive nothing except a vague promise that the economy will adjust.
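The mechanism is easy to make concrete with a toy quantity theory sketch. All of the numbers below are hypothetical, and the model is deliberately crude: a fixed stock of goods, a price level that tracks the money supply, and an insider who gets to spend newly created money before prices adjust.

```python
# Toy illustration of the Cantillon Effect (all numbers hypothetical).
# A fixed stock of goods; the price level tracks the money supply.
# The insider spends new money at OLD prices; everyone else only
# encounters the new money after prices have already risen.

GOODS = 1_000            # fixed real output
money = 10_000           # initial money supply
price = money / GOODS    # quantity-theory price level: 10.0

new_money = 2_000        # freshly created money handed to the insider

# The insider buys at the old price level.
insider_goods = new_money / price          # 200 units of real goods

# Afterward, the price level reflects the larger money supply.
money += new_money
new_price = money / GOODS                  # 12.0

# An outsider holding 1,000 in cash sees purchasing power fall.
outsider_before = 1_000 / price            # 100 units
outsider_after = 1_000 / new_price         # ~83.3 units

print(f"insider buys {insider_goods:.1f} units at the old price")
print(f"outsider's cash: {outsider_before:.1f} -> {outsider_after:.1f} units")
```

The insider converts new money into real goods at full value; the outsider's savings quietly shrink by the same adjustment. Nothing in the sketch requires malice, only proximity to the spigot.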
That comment sat with me all night, because I had always operated under the assumption that building useful things was inherently good, that technology creates more than it destroys, that progress is worth the disruption. But the Cantillon lens forced me to ask a harder question. Who benefits first from what I build, and who pays the cost? And is the gap between those two groups something I can live with?
To answer that question honestly, I had to look at what the AI industry actually does, not what it says it does in press releases and keynote speeches, but where the money goes, where the employees sit, and what the products are designed to accomplish.
The numbers are not encouraging.
Menlo Ventures published a comprehensive analysis of enterprise generative AI spending in 2025, based on a survey of nearly 500 U.S. enterprise decision makers. Enterprises spent 37 billion dollars on generative AI. Of the 19 billion that went to user facing applications, the single largest category was coding tools at 4 billion dollars, representing 55 percent of all departmental AI spending. The rest of departmental spending went to IT operations at 700 million, marketing at 660 million, customer success (a polite way of saying customer service replacement) at 630 million, design at 510 million, and HR and recruiting at 365 million. Vertical AI, meaning tools built for specific industries, captured 3.5 billion, with healthcare leading at 1.5 billion, followed by legal at 650 million, creator tools at 360 million, and government at 350 million. The remaining 8.4 billion went to horizontal copilots, the general purpose productivity tools like ChatGPT Enterprise and Claude for Work.
Now look at those numbers through the moral lens I described. Healthcare AI, arguably the most clearly defensible application of the technology, captured 1.5 billion out of 19 billion in application spending. That is less than eight percent. Coding tools, which are increasingly marketed not as programmer assistants but as programmer replacements (according to Vention, 76 percent of enterprise decision makers say they will need to hire fewer software developers, and 41 percent specifically target junior engineers), captured more than double that amount. Customer success tools, which represent the automation of the very human act of helping another person solve a problem, captured 630 million. HR automation, which replaces the human judgment involved in deciding whether someone gets a job, captured 365 million.
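The shares follow directly from the Menlo Ventures figures cited above; a quick back-of-the-envelope check:

```python
# Arithmetic check on the Menlo Ventures figures cited in the text
# (amounts in billions of dollars).
app_spending = 19.0     # total user-facing application spending
healthcare = 1.5        # vertical healthcare AI
coding = 4.0            # coding tools, the largest single category

healthcare_share = healthcare / app_spending
print(f"healthcare share of app spending: {healthcare_share:.1%}")   # 7.9%
print(f"coding vs healthcare spending: {coding / healthcare:.1f}x")  # 2.7x
```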
The pattern becomes even clearer when you look at what AI is actually being used for, rather than what it is sold as. Anthropic, the company that builds Claude, published something called the Economic Index, which analyzed roughly a million real conversations with their AI to understand how people and companies are using the technology. At the individual level, the news is not entirely grim. About 57 percent of individual Claude usage patterns represent augmentation, meaning people using AI to help them do their own work better. The remaining 43 percent represents automation, meaning AI performing tasks that a human would otherwise do. But here is the number that should keep you up at night. When you look at enterprise API usage, how companies actually deploy AI in their products and operations at scale, 97 percent follows automation dominant patterns. Not augmentation. Automation. The machines are not helping humans work better. They are replacing humans entirely, and they are doing it at scale, behind APIs, in production systems, without anyone writing a press release about it.
Anthropic's data also reveals which occupations are most affected. Software development and computer science occupations account for 37.2 percent of all AI usage. Arts, design, media, and writing account for 10.3 percent. Business and financial operations account for 8.1 percent. Education and library functions account for 6.5 percent. The AI industry's own data shows that the technology is overwhelmingly being used to automate knowledge work, not to cure diseases or model climate change or predict earthquakes.
McKinsey's 2025 global survey on AI confirms this trajectory. When asked which business functions would see decreased headcount as a result of AI, executives pointed first to service operations (customer care and field services), then to supply chain and inventory management, then to HR, then to manufacturing. A median of 17 percent of respondents reported workforce reductions in the past year due to AI, but 30 percent expect reductions in the coming year. The trend line points in one direction.
The real world examples are already piling up. Klarna, the buy now pay later company, publicly announced that AI was doing the work of 700 customer service agents. Salesforce cut 4,000 positions from its customer support workforce. BT, the British telecom giant, announced plans to eliminate up to 55,000 jobs, with 10,000 replaced directly by AI. Dukaan, an Indian startup, fired 90 percent of its support staff after deploying a chatbot. Pinterest announced plans to lay off 15 percent of its human workforce in 2026 and redirect the savings to AI. Amazon cut 30,000 corporate positions across 2025 and early 2026. These are not projections or warnings. They are announcements already made and cuts already carried out.
Against this backdrop, the morally defensible work in AI looks like a rounding error. AI safety and alignment research, the work of ensuring that increasingly powerful AI systems behave as intended and do not cause catastrophic harm, employs approximately 1,100 full time equivalent workers globally. That number comes from a detailed census published on the Effective Altruism Forum and LessWrong, which identified 70 organizations and approximately 600 technical and 500 nontechnical safety researchers worldwide. To put 1,100 people in perspective, Google DeepMind alone employs 6,000 people. OpenAI employs roughly 4,000. Anthropic employs about 2,300. The entire global safety workforce would not fill a single floor of a major AI lab. And the gap is widening, not closing. Safety organizations grow at about 21 percent annually. Capabilities organizations grow at 30 to 40 percent.
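To see why the gap widens rather than closes, compound those growth rates for a few years. This is a back-of-the-envelope sketch: the capabilities baseline below is a rough proxy built only from the three labs just mentioned, and 35 percent is simply a midpoint of the cited 30 to 40 percent range.

```python
# Compounding the growth rates cited above. Starting headcounts come
# from the census and lab figures in the text; the capabilities base
# (just three labs) understates the true industry-wide number.
safety = 1_100.0
capabilities = 6_000 + 4_000 + 2_300   # DeepMind + OpenAI + Anthropic

for year in range(5):
    safety *= 1.21        # safety organizations: ~21% annual growth
    capabilities *= 1.35  # capabilities organizations: ~35% annual growth

share = safety / (safety + capabilities)
print(f"after 5 years: ~{safety:,.0f} safety vs ~{capabilities:,.0f} capabilities")
print(f"safety's share of the combined workforce: {share:.1%}")
```

Even with safety headcount more than doubling, its share of the combined workforce falls from about 8 percent to about 5 percent in five years, because the other side compounds faster.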
Medical AI, precision agriculture, climate modeling, accessibility technologies, disaster prediction, translation tools, educational tutoring systems, and scientific research acceleration all represent genuinely important work. But collectively, by my best estimate from triangulating enterprise spending data, job creation statistics, and workforce surveys, these defensible categories account for somewhere between 20 and 30 percent of the AI workforce and its associated spending. And even within those categories, the line between augmentation and replacement is blurry. Healthcare AI spending includes administrative automation that eliminates billing clerks. Educational AI includes platforms marketed as teacher replacements. The defensible work exists, but it is not where the center of gravity sits.
The remaining 60 to 75 percent of the industry is building what I would classify as replacement technology. Customer service chatbots. Automated journalism systems. Self driving vehicles designed to eliminate millions of driving jobs. Retail automation. Fast food automation. Legal and accounting automation positioned not as tools to help professionals work better but as systems to make junior professionals unnecessary. Telemarketing bots. AI art generators marketed as replacements for illustrators and graphic designers (a quarter of illustrators already report losing work to AI). HR screening systems that eliminate recruiters. Warehouse picking robots. Robo advisors replacing financial planners. And above all, coding automation tools, which now represent the single largest category of AI spending and which are increasingly sold on the explicit promise that companies will need fewer programmers. The irony of that last category is hard to miss: the industry's largest cohort of workers is building the very tools designed to make workers like them unnecessary.
This matters because of the world in which it is happening. In December 2025, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence" that establishes a "minimally burdensome" national standard for AI regulation and directs the Department of Justice to challenge state laws attempting to address algorithmic discrimination or other harms. The order explicitly targets state level protections as impediments to American AI dominance and frames any attempt to regulate bias or require transparency as "embedding ideological bias." States that do not comply risk losing federal broadband funding. The message is clear. There will be no meaningful regulation of AI's impact on workers. Not from this administration, and not from any state government that wants to keep its federal funding.
China, the other major AI power, combines massive state investment with governance mechanisms that require ethics reviews, content labeling, and alignment with "core socialist values." This sounds like oversight until you realize that it means AI systems must reinforce party ideology, that data the Communist Party deems politically sensitive must be excluded from training, and that the entire framework prioritizes state control over worker protection. China's approach is not a model for responsible AI governance. It is a different flavor of the same problem.
The European Union has the AI Act, which is the most comprehensive regulatory framework in existence, but it remains focused primarily on high risk use cases and does not meaningfully address the labor displacement question. No major government anywhere on earth has implemented anything resembling an adequate response to the scale of job displacement that AI is already causing and will increasingly cause. No retraining programs worth the name. No portable benefits. No adjustment assistance that matches the speed and scale of the disruption. The Wall Street Journal editorial board, hardly a bastion of progressive economic thinking, published a piece in February 2026 titled "Government Won't Help the AI Job Transition" arguing that past government attempts to ease workforce transitions have largely failed and increased costs for society overall.
So here we are. The technology is designed overwhelmingly to replace rather than augment. The governments that could manage the transition are choosing not to. The wealth generated by AI is flowing, precisely as Cantillon would have predicted, to those closest to the source of capital, while the costs are borne by those furthest from it. Customer service agents in Manila. Junior lawyers in Chicago. Graphic designers in São Paulo. Truck drivers everywhere.
I want to be fair to the counterargument, because it is not entirely wrong. The "someone else will build it if I don't" reasoning contains a factual truth. AI development will continue regardless of any individual's career choices. Blanket abstention from the field cedes it entirely to people who care nothing about human welfare. There is a version of the argument that says thoughtful people must remain inside the tent, building safety tools, pushing for responsible development, designing augmentation rather than replacement. I take that argument seriously. I think it applies to the 1,100 safety researchers, to the people building medical diagnostic tools, to the teams working on accessibility technology, to the engineers building climate models. Those people are doing essential work, and the world needs more of them.
But if your specific job is to build tools designed to replace humans rather than augment them, then I do not think this argument saves you. You are not shaping the future of AI from the inside. You are building the specific weapon that will be aimed at specific people who have no alternative and no recourse. The abstract possibility that AI might eventually create new jobs to replace the ones it eliminates does not help the 55 year old customer service representative who just got fired and has a mortgage payment due in three weeks. The theoretical long run benefit does not help the freelance illustrator whose client just replaced her with Midjourney. The promise of future retraining programs does not help anyone when those programs do not exist and the current government is actively dismantling the regulatory infrastructure that might have created them.
I keep returning to the Cantillon Effect because it captures something essential about this moment. New money, new technology, new capability: none of it enters the system evenly. Those closest to the source benefit first and most. Those furthest away pay the price. In the original Cantillon observation, Spanish merchants near the ports got rich while inland peasants got poorer. In the AI version, the venture capitalists, the early employees with equity, the compute providers, and the executives who can report labor cost savings to their boards get rich. The people whose labor is being automated get nothing except the opportunity to compete for fewer and worse jobs.
The question every person working in AI needs to ask themselves is simple enough. Is the thing I am building designed to help people do their work better, or is it designed to make people unnecessary? Am I creating tools that extend human capability, or am I building systems that concentrate wealth upward while destroying livelihoods downward? Am I doing humanity a service, or am I helping to destroy it for a paycheck?
The honest answer, for the majority of people working in AI today, is uncomfortable. The industry does not like to frame it this way. It prefers words like efficiency, optimization, and scale. It talks about freeing people from drudgery and unlocking human potential. But when 97 percent of enterprise API usage follows automation dominant patterns, when the single largest spending category is tools to reduce headcount, and when the executives surveyed by McKinsey say service operations is where they plan to cut people first, the language of liberation starts to sound like the language of something else entirely.
I do not have a clean answer for what to do with this realization. I can tell you what I think. I think every person working in AI should look honestly at the product they are building and ask whether it passes the test. Does this help people, or does this replace people? And if the answer is that it replaces people, in an era when no institution is prepared to catch the people who fall, then you should understand clearly what you are participating in. You may decide the paycheck is worth it. You may decide that the abstract future benefits outweigh the concrete present harms. But you should not pretend the question does not exist, and you should not let the industry's marketing language obscure what is actually happening.
The 1,100 people working on AI safety are doing morally defensible work. The researchers building diagnostic tools that help doctors catch cancer earlier are doing morally defensible work. The engineers building screen readers and communication devices for disabled people are doing morally defensible work. The scientists using AI to model protein folding and climate systems and earthquake prediction are doing morally defensible work.
The rest of us owe ourselves an honest accounting.

