Where US Policy on AI May Take Society
The architects of America’s AI policy believe they are building the future. Marc Andreessen, whose Techno-Optimist Manifesto declares that technology must be “a violent assault on the forces of the unknown,” has spent roughly half his time since the election advising President Trump’s team on technology and business. David Sacks, Trump’s appointed AI and crypto czar and a fellow venture capitalist from the PayPal Mafia, holds ethics waivers that critics say grant him unusual latitude to shape federal policy while retaining his tech investments. Together, they have convinced the administration that the greatest threat to America is not the social disruption AI might cause but the possibility that state regulations or safety requirements might slow down deployment.
Their executive orders tell the story clearly. On December 11, 2025, Trump signed an order instructing the Justice Department to set up a new task force to challenge state AI laws and directing federal agencies to circumvent what the administration calls “onerous” state and local regulations. On July 23, 2025, he signed an executive order to accelerate permitting for large AI data centers, establishing environmental waivers and directing federal agencies to identify federal land for construction. The administration rescinded Biden’s executive order on AI safety within hours of taking office on January 20 and replaced it three days later with an order focused on “removing barriers to American leadership in artificial intelligence.” When defending these policies, Sacks argues that “a 50-state patchwork is a startup killer” and that “what we need is a single federal or national framework for AI regulation.”
What these men apparently fail to grasp is that the monster they are creating has less to do with the technology itself than with the social and political dynamics their policies will unleash. They see only the upside of rapid deployment and dismiss concerns about displacement, inequality, and democratic stability as the complaints of pessimists infected with what Andreessen calls “deceleration” thinking. Andreessen has argued that those who believe in regulating AI due to perceived existential risks are part of a “cult,” and that “any deceleration of AI will cost lives” because beneficial applications will be delayed. They have constructed an ideological framework where questioning the pace or direction of AI deployment marks you as an enemy of progress. In their telling, societies are like sharks that must grow or die, and any friction in the path of technological acceleration is an existential threat to American competitiveness.
The worst backlash scenarios are precisely the ones this approach makes more likely. When AI becomes widely blamed for job loss, higher bills, and information chaos, populist movements will ride that anger into power on explicitly anti-AI platforms. The early signs are already visible. In Ireland and central Mexico, communities have mounted protests against data center construction, objecting to the drain on water and electricity supplies that the Trump administration now seeks to fast-track through environmental waivers. These local battles over resources could escalate into national referenda or even secessionist movements that frame AI infrastructure as a modern form of colonial extraction. Voter anger about electricity prices linked to data center growth has already emerged as a political liability in states like Virginia and New Jersey.
Political parties could campaign on shutting down data centers, banning entire categories of AI applications, and weaponizing regulation to punish specific technology firms. AI would join climate change and immigration as a permanent fault line in the culture wars, impossible to discuss rationally because every position signals tribal identity. In the extreme version of this trajectory, AI becomes such a potent symbol of elite indifference that trust in all technocratic institutions erodes. Central banks, regulatory agencies, and courts all become suspect. Any policy that relies on expertise gets dismissed as captured by what critics will call “the AI lobby,” which in this case would not be metaphorical given that actual venture capitalists with direct financial stakes are writing the policies.
The labor dimension follows an equally dark path that Andreessen and his allies seem unable to imagine. The visible pattern now is thousands of layoffs attributed directly to AI, concentrated in entry-level white-collar roles, while wages stagnate and productivity gains flow primarily to shareholders. Andreessen’s manifesto rejects the very concept of universal basic income and insists that technology will create more meaningful work, but it offers no mechanism for how displaced workers transition or what happens during the interval when old jobs disappear faster than new ones appear.
If this continues, the response will not be limited to targeted strikes. Unions and informal worker coalitions could move toward rolling general strikes across logistics, media, education, and public services. The Hollywood writers’ strike of 2023 already demonstrated how unions can secure binding contractual limits on AI use when they organize effectively, setting a template that other sectors may follow. The feeling among younger and mid-career workers would be not just anxiety about the future but a sense of active betrayal by a system that promised mobility through education and effort, while the architects of that betrayal receive special permission to profit from the very policies causing the displacement.
Sabotage points toward an even darker possibility. Anecdotal reports already describe workers deliberately undermining AI systems in warehouses and robotics facilities. If such resistance becomes widespread, employers will crack down, which in turn will further radicalize workers. History provides uncomfortable precedents. Concentrated technological unemployment combined with weak social safety nets has correlated with spikes in suicide, drug abuse, and recruitment into extremist movements. Smooth retraining programs remain mostly theoretical. The actual experience has been displacement, resentment, and a cohort of people who conclude the system has no place for them and therefore deserves to be torn down.
Economic nationalism adds another layer of fragmentation that the techno-optimists have not considered. If AI becomes synonymous with the offshoring of white-collar work and the hollowing out of regional economies, governments will respond with aggressive techno-protectionism. Heavy tariffs on AI services, restrictions on compute exports, and mandatory “human-first” quotas for certain industries would fragment global AI supply chains. The very “single national framework” that Sacks and Andreessen demand domestically could trigger retaliatory measures internationally, setting off tech trade wars that slow economic expansion and reduce cooperation on shared risks.
That fragmentation makes it nearly impossible to coordinate on safety standards for the most powerful systems, raising the likelihood that some actors race ahead with minimal oversight, which is precisely the scenario the current deregulatory approach creates. When Andreessen dismisses AI safety concerns as coming from people who belong to what he calls a “cult,” he is not just attacking his critics. He is dismantling the possibility of coordinated governance before the risks fully materialize.
Perhaps the most insidious dimension involves the collapse of shared epistemology. If AI-generated content and personalized manipulation become ubiquitous, large segments of the population may simply abandon the effort to maintain common ground on facts. Society fractures into hardened epistemic tribes where each group trusts only its own curated newsletters, local religious or political leaders, and closed messaging channels. Everything mediated by AI gets treated as propaganda.
The irony is rich. Andreessen and Sacks spend considerable energy warning about “woke AI” and “censorship” by Big Tech, framing content moderation as ideological bias smuggled into training data. Their solution is not neutral AI but rather AI trained according to different ideological preferences, which they frame as “truth seeking.” What they cannot seem to envision is a world where their crusade against what they call AI censorship contributes to a permanent fracture in the public sphere. Collective action on any issue becomes nearly impossible because there is no longer consensus about basic reality, including election results.
In this environment, bad actors can exploit AI tools to generate tailored disinformation cheaply and at scale. The backlash becomes a permanent wound. Trust never recovers, even if the technology improves, because the damage is not to the tools themselves but to the shared institutions and norms required to evaluate truth claims together. The very velocity that Andreessen celebrates as creative destruction ensures there is no time for society to build new norms before the old ones collapse.
The regulatory response could make things worse rather than better. A backlash that protects incumbents while freezing out everyone else is entirely plausible, and the current administration’s approach makes this more likely, not less. Ethics expert Kathleen Clark has described Sacks’ waivers as “like a presidential pardon in advance” that essentially say “go ahead and take action that would ordinarily violate the criminal conflict of interest statute, we won’t prosecute you for it.” When people in this position shape federal frameworks while holding substantial AI investments, the result will not be a level playing field. It will be regulatory capture that serves those with access to power.
Wealthy individuals and large corporations will retain privileged access to powerful systems behind corporate gates while lobbying for strict controls that limit open source models, small competitors, and grassroots experimentation. Ordinary users will see only heavily constrained, low quality AI and remain exposed to exploitative pricing and pervasive surveillance because they lack the leverage to demand better. This produces a world where AI deepens class divisions but innovation is also slowed and centralized. It is the worst of both outcomes.
The common thread in all these scenarios is that AI comes to be experienced primarily as extraction and control rather than empowerment. The technology may deliver genuine benefits to some, particularly those who, like Andreessen and Sacks, hold equity positions in the companies deploying it. But if the lived reality for most people is job insecurity, higher costs, information chaos, and a sense that billionaire venture capitalists are writing policies to protect their portfolios while everyone else absorbs the downside, then these darker trajectories become not just possible but likely.
The danger is not that Andreessen and Sacks are malicious. The danger is that they are true believers in an ideology that cannot imagine its own failure modes. They have convinced themselves and the President that any attempt to slow down, consider second-order effects, or build democratic guardrails is tantamount to surrender in a race with China. The manifesto quotes Marinetti, the Italian Futurist who later helped author the Fascist Manifesto, celebrating technology as a “violent assault” that must “force the unknown to bow before man.” This is not the language of people who are thinking carefully about unintended consequences.
When Steve Bannon lobbies against Sacks’ policies at Mar-a-Lago and calls for a pause on AI labs’ pursuit of superintelligence until risks are better understood, it signals how far outside normal risk assessment the current approach has drifted. Even within the MAGA movement, there is recognition that unchecked acceleration poses dangers. But Sacks and Andreessen have captured the President’s attention with promises of economic dominance and warnings about “woke” content moderation, and so the apparatus of government mobilizes to remove barriers to deployment rather than to manage the transition.
The monster being created is not artificial intelligence itself. The monster is a social and political backlash so severe that it makes democratic governance impossible. By the time the architects of this policy recognize the pattern, the damage to institutions and social trust may already be irreversible. The backlash will have become the story, overshadowing whatever benefits the technology might have delivered under different circumstances. And the venture capitalists who spent half their time advising the President on how to unleash AI without friction will discover that markets require stable societies, that growth requires trust, and that societies are not in fact like sharks that must move forward or die. Sometimes what looks like deceleration is actually the work of building foundations strong enough to support the weight of what comes next.

