The "Fiction-Uncertainty-Threat" Axis Underpinning Negative Reactions to AI
The "availability heuristic" is a cognitive failure whereby people believe that easily imagined (or recently seen) possibilities are more likely. This combines with anxiety-inducing uncertainty...

Think fast: when you think “AI,” what’s the first image that pops into your head?
Human memory and cognition are largely associative, meaning our brains engage in pattern-matching: this thing is like this other thing. Our brains make several kinds of associations: temporal associations cluster things that happened around the same time; conceptual associations link ideas that share abstract meaning; and so on.
Works of fiction, ranging from The Terminator to Age of Ultron, give us immediate apocalyptic imagery that feels real enough. This could be considered a sort of “cognitive priming” operating through the availability heuristic.
Another thing I’ve noticed about Doomers, people who are terrified of AI or who “rationally” believe that AI will certainly kill everyone, is that they trust their emotions entirely. In other words, they believe that their emotions are legitimate sources of valid, useful, actionable information.
I am anxious, therefore there’s a threat. My anxiety is tuning into something that is totally real! What is the threat? I hear the “boss music” playing in my head, therefore the enemy must be near! Aha! That’s it! It’s the AI!
I’ve noticed that many Doomers come preloaded with a catastrophic worldview, anxiety, or depression. Human brains have a remarkable ability to confabulate. A person’s preexisting anxiety, depression, or trauma simply gets attached to AI, as though they finally have an explanation or outlet. Aha! This is why I’m so depressed and anxious! It’s clearly [X]! (Note: economic calamity, collapse narratives, climate change, and AI all commonly fill in for [X] here.)

To restate all this another way: when we are feeling miserable (anxious, depressed, lonely, etc.), we naturally reach for explanations and self-soothing. Latching onto doomsday prophecies provides a sort of psychological safety blanket.
“I have no control over AI (aka my depression), but at least I can get some semblance of control by believing that I understand the narrative!”
Here’s how this axis goes:
Works of Fiction provide the initial priming via the “availability heuristic.” Consider that, to most people, AI is a black box: a mystical device onto which they can project all their techno-anxiety and uncertainty. So what do they do? They latch onto images of Ultron and the squiddies from The Matrix.
Periods of Uncertainty, such as climate change, economic crises, and global pandemics, create a more active, pervasive “threat landscape.” When you feel like you’re on unstable ground, you naturally (and understandably) want to regain some level of control. This is why personal hardship and societal upheaval tend to increase susceptibility to paranoia and conspiracy theories.
Belief in Threat seems to be the final piece. “I am anxious, therefore there MUST be a threat; my anxiety is real, therefore the threat is real.” Combine this with the mental priming of fiction and a magnified need for control (heightened by events like our recent global pandemic), and you end up with people “rationally” believing that AI is a threat.
I’ve long had another pet theory that people often project their attachment schema onto AI. People with secure attachment tend to view AI as “just another technology,” whereas people with anxious, insecure, or disorganized attachment are far more likely to land in the tails of the bell curve: seeing AI either as a godlike (or maternal) savior, or as the clear end of all humanity.

Anyway, none of this is particularly new or insightful. I’m borrowing heavily from the behavioral science literature on conspiracy theories. There are a few existing models for understanding Doomerism and threat responses to AI.
Conspiracy Theory Lens: In many cases, a person’s pathologically negative beliefs and attitudes toward AI can be characterized as a conspiracy theory, a bid for cognitive closure, certainty, and some measure of control in reaction to personal pain, anxiety, and uncertainty.
Doomsday Prophecy Lens: In other cases, particularly with public figures shilling Doomer narratives, the framing follows the quintessential doomsday prophecy (to the letter). Doomsday prophecies are particularly insidious because they are well-crafted narrative templates that exploit vulnerabilities in at-risk populations (such as people susceptible to conspiracy theories).
In Summary
There are a few ingredients that typically seem to go into catastrophic thinking about AI.
Trusting their emotions without scrutiny: They feel anxiety (a threat response) and believe that this feeling accurately represents something outside themselves. I feel threatened, therefore I AM threatened.
Psychologically primed by fiction: Many of these folks are terminally online, steeped in internet culture or other forms of digital entertainment. This gives them a ready-made (albeit fictional) framework through which to “interpret” AI.
Facing uncertainty or hardship: Whether it is climate change, economic anxiety, personal hardship, or a global pandemic, any of these can make people more susceptible to doomsday prophecies and conspiracy theories.
This is what I have outlined as the FUT axis: Fiction » Uncertainty » Threat. It doesn’t explain all Doomers, and I really had no intention of revisiting this topic. But as AI becomes more mainstream, tension continues to rise. I had hoped that AI would become less divisive over time; in reality, it has only become more divisive, and I fear this trend may continue for many years to come.
A Large Caveat
The point of this blog post is not to say “anyone and everyone with concerns over AI is clearly delusional!” There are many legitimate concerns about AI, and the scope of those concerns is certainly open to healthy debate! I am merely pointing out that a particular set of characteristics seems to make some people more inclined toward catastrophic/apocalyptic thinking.
David, I really gained great perspective from this piece. I haven't had this phenomenon explained quite like this. I see your connections here, and when I test it on the people I interact with on this topic, it explains their doomsday view. What do you suggest will help bring these doomsday people around to an optimistic view?
There is certainly no shortage of dumb, emotional opinions on the internet. But none of what you’ve written squares with the many smart and sober people I read who think AI might be catastrophic (Zvi Mowshowitz, Ajeya Cotra, etc.) and, worse, that if things do go bad it will be too late to do anything about it. My view is that if people understood the risks properly they would be, on average, far more concerned than they currently are.
Which brings me to my question: people are storytellers by nature. AI is something that people have thought a lot about, but until now (or really the next 5 years) we could not test theories or separate truth from fiction. Don’t you think it’s appropriate to use what (until recently) was science fiction to develop an intuitive and emotional understanding of what could happen?