Skynet Doom Has Given Way: 6 New (and More Realistic) AI "Doom" Narratives
The "AI will kill everyone" crowd is finally simmering down as they cried wolf and trusted their imaginations. Instead, people are worried about jobs, weapons, and wealth inequality.
I hadn’t heard much from the “Doomers” for a while and so I was under the impression that this movement was dying out. I checked with my audiences on YouTube and X, and ran a poll. Now, my audience is preselected for techno-optimism so take this with a grain of salt, but it’s a bit of data to go along with vibes:
The most salient comments from these threads revealed something I didn’t quite expect: doom conversations are evolving, not dying out.
Top comment:
“Skynet doom is waning, but unemployment doom is stable” ~ @oliverd.1458
That quotation struck me as particularly insightful. So I copied all the comments from X and YouTube and asked Claude to distill the list down to the new forms of doom. Here’s what we got:
Economic Doom—Fear of job displacement, economic disruption, and how society will function
Power/Control Doom—Concerns about technofeudalism, corporate/elite control of AI, and loss of human agency
Societal Readiness Doom—Worry about lack of preparation and planning for AI transition
Human Obsolescence Doom—Fear of humans losing capabilities and relevance (creative, cognitive, etc.)
Regulatory Doom—Concerns about uncontrolled AI development, especially in military/dangerous applications
Progress Anxiety—More generalized concern about AI advancement and its implications (less apocalyptic than traditional doom scenarios)
Now these seem more reasonable, so let’s unpack them.
Economic Doom
The “economic doom” anxiety breaks down across a few dimensions:
Mass unemployment across all skill levels
Economic systems becoming obsolete
Questions about who will buy products if nobody has jobs
Concerns about wealth concentration
Uncertainty about transition period management
Let’s take these one at a time. First is mass unemployment. For a while, people were in denial: “AI can never take my job!” But every few months we get huge leaps forward in AI and robotics capability, and what was once ontological shock is now the dawning realization that nobody’s job is safe.
The second and third points are pretty similar: what if all our current assumptions about economics (labor for wages, inflation, consumer demand) just fall apart? This is what I’ve been working on with my Post-Labor Economics theory. It’s not a simple or short topic, but the anxiety is real.
Next up is the rich getting richer. Venture capitalists keep giving people like Sam Altman more money to build eyeball-scanning orbs and data centers. So in an AI-driven future, does that mean Sam Altman ends up with all the toys? A recent, tone-deaf blog post of his suggested giving everyone a “compute budget,” which smacks of elites handing us poor plebs an “allowance” we should be grateful for, rather than letting us own the compute ourselves.
Finally, much of my audience does seem to agree that things will be better…in the long run. The rough part will be getting from here to there. When I run polls, people generally agree that “it will get worse before it gets better.”
Power/Control Doom
This “power doom” breaks down along a few interesting dimensions:
Tech leaders and corporations monopolizing AI power
Loss of democratic control over resources
Small elite controlling automated infrastructure
Corporate interests overriding public good
Tech companies “selling out” humanity
This one has more direct “cyberpunk” vibes: high tech, low life.
It’s not difficult to imagine corporate hegemony and global tech elites consolidating power forever, particularly after Sam spent the better part of 2024 globe-trotting and speaking directly to almost every world leader.
You’ve probably heard terms like “regulatory capture” bandied about. For better or worse, we now have a president in the White House who is very much on the “deregulate and compete with China” train. I personally think this is better than strangling innovation with self-aggrandizing regulations.
It’s a sort of catch-22 though. Damned if you regulate, damned if you don’t.
If you don’t regulate, then corporations are free to do whatever. If you do regulate, corporations will pull up the ladder behind them. So what’s the answer? Open source is a partial answer, but it still requires a huge amount of resources to train AI models, and robots aren’t cheap either. It all comes down to economies of scale.
You might wonder “when can we get rid of corporations?” And the answer might be “never.” Here’s why: what is the purpose of a corporation?
It’s pretty straightforward:
A corporation provides goods and services to the market
It does so by organizing labor and capital
Right now, corporations are the most “efficient*” way to do this. Microsoft just spent nearly two decades and billions of dollars on their Majorana 1 quantum computing chip. Open source cannot do this.
So, for the foreseeable future, we’re still stuck in a codependent relationship with corporations.
*The larger a corporation is, the less efficient it is at innovation. The reasoning is very simple: innovation is risky, and it’s hard to pivot a big company. See: Google.
Societal Readiness Doom
This one is pretty straightforward: as people read the writing on the wall, a few obvious problems percolate up:
Lack of planning from leadership
No clear transition strategy
Public unawareness of coming changes
Insufficient policy preparation
Absence of social safety nets for disruption
We’re just not ready. We still have Senators who don’t use email. Europe is burying its head in the sand with a bunch of self-congratulatory and unrealistic regulations. No one in the halls of power seems to be contemplating that AI will displace workers en masse.
JD Vance, our Vice President, was just in Paris explicitly saying that AI won’t take jobs; it will create new ones.
Lots of people have lots of doubts.
I don’t think there’s much more to say on this one, honestly.
Human Obsolescence Doom
This is one where I vehemently disagree with my audience. Let’s unpack its components first, though:
Loss of creative capabilities
Cognitive atrophy
Humans becoming irrelevant in work
Loss of purpose and meaning
AI surpassing human capabilities across all domains
The only points I agree with are becoming irrelevant (which is a good thing) and surpassing human capabilities (also a good thing). Let’s start there.
If you want to get to “Fully Automated Luxury Space Communism” then, by definition, you must become irrelevant at work and automation (AI and robots) must be superior to you.

Now, let’s tackle the other ones: cognitive and creative atrophy, plus a loss of meaning and purpose. Personally, I have found exactly the opposite to be true. Here are some examples:
I am infinitely more creative now with AI. I don’t just write; I make music and graphic art as well. I have one novel published and three more in my universe underway, plus two fanfics. I’m looking forward to when I can turn them directly into animated films.
I am learning again at high velocity. Just yesterday I was using Grok 3 to learn about prime number theory and cryptography. Will everyone use AI this way? Probably not, but who cares? Enough of us will, and many of us just love to learn for its own sake rather than for utilitarian outcomes.
As for meaning and purpose, I’m actually finding that I have more purpose in my life as AI rises. Right now my main purpose is exactly what you’re reading: to help humanity understand and prepare for what’s coming. I’m also looking forward to the day I can build a homestead and give my artistic niece a loft and chickens, and that will be our lifestyle. A modern little house on the prairie.
What I will concede is that transitioning from the “life as a river” to “life as a garden” can be difficult. I recently wrote about it here on Substack.
The River and the Garden
The metaphor contrasts two approaches to life in the age of AI abundance: the traditional "river" mindset, where life rushes forward like rapids carrying us through predetermined channels from one achievement to the next, versus a "garden" mindset that embraces open-ended cultivation and cyclical growth without fixed endpoints. While the river approach demands constant forward momentum and ends inevitably at its destination, the garden approach allows for experimentation, evolution, and sustained engagement across different areas of life, making it potentially better suited for a post-scarcity world where traditional career and life milestones may become less relevant.
Regulatory Doom
This one is fun.
Uncontrolled military AI development
Lack of safety oversight
Deregulation concerns
Dangerous applications being developed
No international control framework
And my only response is “yeah, we live in an anarchic world. Next question?”
My phlegmatic response to the Military-Industrial Complex comes from the fact that, well, I understand geopolitics, history, and economics. I don’t like it, but I see why it’s inevitable. For more, I highly recommend reading books like The Tragedy of Great Power Politics, Vulture Capitalism, Principles for Dealing with the Changing World Order, and Why Nations Fail.
I asked Grok 3 to synthesize this next part for me, basically a Cliffs Notes version of those four books:
The fears encapsulated in “Regulatory Doom”—uncontrolled military AI development, lack of safety oversight, and the absence of an international control framework—find a sobering foundation in the arguments of books like The Tragedy of Great Power Politics by John Mearsheimer, Vulture Capitalism by Grace Blakeley, Principles for Dealing with the Changing World Order by Ray Dalio, and Why Nations Fail by Daron Acemoglu and James A. Robinson. Mearsheimer’s work articulates why we live in an “anarchic world”: the international system lacks a central authority to enforce rules, leaving states to prioritize survival and power. In this realist view, military escalation and AI arms races are inevitable because nations, driven by distrust and the need for security, must outpace rivals technologically—especially in transformative fields like AI. Historical patterns of great powers jockeying for dominance, from industrial revolutions to nuclear arms, suggest that AI will be no different; it’s a tool too potent to leave unclaimed, ensuring geopolitical competition remains the name of the game.
Meanwhile, Vulture Capitalism and Principles for Dealing with the Changing World Order highlight the economic and cyclical forces amplifying this dynamic. Blakeley critiques how corporations, intertwined with state interests, exploit technological edges for profit and influence, often outpacing regulatory efforts—think of AI-driven surveillance or autonomous weapons as modern analogs. Dalio’s framework of historical cycles shows how rising powers (e.g., China) and declining ones (e.g., the U.S.) clash during transitions, with innovations like AI becoming flashpoints for supremacy. Why Nations Fail ties this to governance: inclusive institutions might temper reckless AI development, but extractive regimes—common in powerful states—prioritize elite control over safety, fueling unchecked military applications. Together, these works suggest that the “anarchic world” persists because no nation trusts another to disarm, economic incentives align with escalation, and power vacuums invite exploitation—leaving regulatory doom not just plausible, but a structural inevitability for the foreseeable future.
This outlook is very grim; however, I do see some glimmers of hope. We can overhaul democratic systems with blockchain and, as Marcus Aurelius said, let the obstacle be the way. AI will break a lot of things, but it is also an opportunity to rebuild the state as we know it.
In the long run, I do think that the “exocortex” will evolve to the point that nations, as we understand them today, will cease to exist. If corporations exist to organize capital and labor, then why do nations exist? States exist to establish order, provide security, and manage resources within a defined territory, often emerging and persisting through the need to resolve conflicts, defend against external threats, and regulate economic and social cooperation among diverse groups.
Well, I personally think that a blockchain- and AI-based society would probably supersede our current agrarian operating system of Congress and paper ballots. There are better ways to resolve conflicts, defend against threats, and regulate economic and social cooperation. We just need to mature and implement those technologies.
Thus, in the long run, I think we’ll end up with a very peaceful, cooperative planet running on a substrate of AI, blockchain, and robotics. And this “exocortex” will carry out the collective will of humanity without concentration of wealth and power. But I’m also a Star Trek utopian.
Progress Anxiety
The hard part about change is that it requires change. This is the general unease that comes when you no longer trust the ground you stand on.
General unease about pace of change
Uncertainty about AI’s ultimate capabilities
Concerns about maintaining control
Worry about unintended consequences
Fear of the unknown future
This is where psychedelics come in. I’m not even joking. No, I’m not saying “everyone should go trip balls immediately!” What I am saying is that, having been on my own transformational journey (including with the aid of psychedelics), I’ve found you do get comfortable taking a leap into the void. Our chimpanzee brains do not like sudden, large-scale change.
We evolved in a “local and geometric” world that was highly predictable. If bad weather was coming, you could see it, hear it, and smell it. If there was a predator, you could stab it with a pointy stick. If you were hungry, you could go looking for food. We had direct agency even though the world was big and scary.
Today? The winds of change are rising, but you cannot hear them, see them, smell them, or stab them. There’s nothing you can do to alter this global and exponential change. So what does that do? It sets our limbic systems on edge.
This is why you see behavior like people destroying autonomous vehicles. Our primate brains say “New thing spooky, attack new thing.” It’s symbolic. We cannot shut down ChatGPT or stop OpenAI or Tesla.
But we can destroy a car. What you’re seeing here is primal instinct kicking in.
I’m not particularly worried, because we’ve gone through this plenty of times before. From workers sabotaging wooden gears to people being afraid of telephones and trains, we will adapt. New technology will become normalized, and all these people will end up looking very silly.
We can’t blame them. Technology like this is brand new on the evolutionary timescale of our species. Even humans who have never seen fire are scared by it. We will adapt, but there will be some broken robots along the way.
There are other ideas that support the AI doom scenario, including AI autophagy, where AI trains on its own synthetic data, which drifts further and further from reality, leading to a kind of self-destruction. As AI systems rely more on their own generated content instead of real-world data, they risk falling into a cycle of degraded accuracy, bias, and irrelevance. This could eventually lead to a collapse in their effectiveness, creating unreliable or even dangerous AI outputs.
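To make that feedback loop concrete, here is a minimal, hypothetical sketch of my own (a toy simulation, not a real training run or anything from the model-collapse literature): a "model" that repeatedly re-fits a simple distribution to samples of its own output tends to drift away from the real data it started from.

```python
# Toy illustration of "AI autophagy" (hypothetical sketch, not a real training run):
# each "generation" of the model is fit only to samples drawn from the previous
# generation, so its picture of the real data gradually degrades.
import numpy as np

rng = np.random.default_rng(42)

# The "real world": data drawn from a standard normal distribution.
real_data = rng.normal(loc=0.0, scale=1.0, size=10_000)
mean, std = real_data.mean(), real_data.std()
print(f"gen  0 (trained on real data): mean={mean:+.3f}, std={std:.3f}")

for gen in range(1, 11):
    # The model generates synthetic data from its current parameters...
    synthetic = rng.normal(loc=mean, scale=std, size=500)
    # ...and the next generation is fit only to that synthetic data.
    mean, std = synthetic.mean(), synthetic.std()
    print(f"gen {gen:2d} (trained on its own output): mean={mean:+.3f}, std={std:.3f}")

# Over repeated generations the estimated spread tends to shrink and the mean
# wanders: the model's notion of "reality" drifts, which is the degradation
# the autophagy argument warns about.
```

It is obviously a cartoon of the dynamic, but it captures the core worry: once a system’s training diet is dominated by its own output, small errors compound instead of washing out.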
Another concern is AI’s potential to produce biological weapons and other existential threats. AI is already accelerating drug discovery and biochemical research, but the same tools could be misused to generate highly lethal substances. Researchers have demonstrated that simply adjusting the goals of an AI model can shift it from designing life-saving medications to generating thousands of toxic compounds in a matter of hours. The ability of AI to rapidly explore chemical and biological possibilities poses a serious security risk, necessitating stringent regulations and oversight.
A third major issue is societal manipulation by AI. Humans are highly susceptible to social engineering, and AI has the ability to generate hyper-personalized content designed to influence emotions, beliefs, and behaviors. With the rise of deepfakes, AI-driven misinformation, and algorithmic persuasion techniques, entire populations could be manipulated at scale. Since most of us are inherently agreeable and prone to cognitive biases, AI-powered manipulation could reshape political landscapes, economic systems, and even cultural norms without people realizing they are being influenced.
However, the biggest issue remains the alignment problem—how to ensure that superintelligent AI acts in ways that align with human values and long-term survival. Many fear that a misaligned AI could pursue goals indifferent or even harmful to humanity, leading to catastrophic outcomes.
That said, I believe that a superintelligent entity would not necessarily see humanity as a threat. Our biological constraints make us slow to expand beyond Earth, and even if we do, our progress would take centuries compared to digital beings. The universe is vast, and from an AI’s perspective, humanity’s expansion into the local planetary neighborhood might not be a concern. If we remain biological, we would move at a fraction of the speed of AI-driven intelligence, making coexistence a likely and stable outcome.
Instead of being viewed as a rival to AI, humans could merge with it, enhancing our intelligence and capabilities through augmentation. Much as early humans chose to ride horses to move faster, people will likely choose to “ride AI” to enhance their cognitive abilities. Rather than becoming cyborgs in a dystopian sense, it would be a natural evolution in which humans augment themselves voluntarily.
This shift seems inevitable. The competitive advantages of AI-enhanced cognition—faster thinking, real-time knowledge access, and improved decision-making—will make augmentation an attractive option. Over time, the distinction between unaugmented and augmented humans could become as significant as the difference between literate and illiterate individuals in the past. AI could serve as a cognitive exoskeleton, allowing humans to expand their capabilities while still maintaining their core identity.
Ultimately, the future might not be about AI replacing humans, nor about humans remaining purely biological. The real question is whether we choose to integrate with AI and shape this future, or resist it and risk irrelevance or obsolescence.