What do you mean by "Doomer"?
The term Doomer has become both a pejorative and an epistemic label. Let's unpack what people mean by "Doomer" and explore the fascinating realm of epistemic tribes in the AI space!
Context: Twitter
I recently got back on Twitter as part of building my platform. I was reading the book SUPERFANS by Pat Flynn, and one of the things he recommends is to “go towards the conversation.” I figured that was easy enough. I have, up to this point, stayed out of the status games and epistemic tribes surrounding AI. However, my tribal ambivalence was hindering my growth as a public figure. In other words, by titrating my messages in order to avoid activating any particular tribe, I was increasingly saying nothing of value. I was not challenging the Overton Window.
Upon re-engaging with Twitter, one of the first things I attracted was a sort of internet mob, a phenomenon known as “brigading”―when certain “canaries” start singing, calling attention to a target and, deliberately or inadvertently, triggering a wave of reactionary criticism. The specific epistemic tribe that I activated was the “Pause AI” movement, some of whose podcasts I have appeared on and with whom I otherwise had good relations.
Before proceeding, I will concede that, on the topic of AI safety at least, “halting AI progress” would be the safest option for humanity. It is simply not possible, therefore we need an alternative path. This, to me, seems like the most rational choice. What is irrational, however, is doubling down on a failed strategy to the exclusion of more reasonable strategies.
However, during my attempts to engage with them, they continuously dodged, evaded, and shifted the onus onto me to educate them. They categorically refused to entertain aspects of my work, such as my alignment work on heuristic imperatives and axiomatic alignment.
I also asked many people I know, including business insiders, Silicon Valley insiders, and government insiders. None of them believe that Pause is workable or desirable. Even when inviting my audience to steelman the argument, no compelling or systematic counterarguments in support of Pause were offered.
As I attempted to get good information from the Pause advocates, I noticed a series of rhetorical gimmicks and logical fallacies.
Appeal to Authority: The Pause folks bank on “social proof,” namely several famous signatories to letters stating that AI represents an existential threat to humanity. To them, this serves as unassailable evidence of impending cataclysm. When pressed, none of them were able to provide scientific or experimental evidence to support their claims.
Nirvana Fallacy: The Pause tribe believes that “halting AI progress is the only way to protect humanity.” While they also advocate for other, more incremental (and feasible) solutions, they will always reiterate that anything short of halting progress spells inevitable doom for humanity.
Cherry Picking: When engaged in debates, the Pause folks studiously avoid any information that contravenes their foreclosed beliefs, and will dodge any engagement with views, ideas, research, or evidence to the contrary. Furthermore, in their “takedowns” of other positions, they gloss over (or entirely ignore) salient points.
Echo Chambers: Most of the Pause people I interacted with refer to a narrow set of “experts” that they believe in and trust. This is typical of any epistemic tribe, i.e., their definition of “truth” and what counts as a “fact” or “expert” is narrowly defined by their own social norms. As one commentator observed, “Their argument is dissolving upon contact with reality.”
Whataboutism: Rather than honestly engaging with criticism or scrutiny, most of the Pause AI people will move the goalposts or otherwise throw a red herring into the debate, usually deflecting to the E/ACC movement. “They are the real Doomers!” Another similar red herring was calling Sam Altman a Doomer (that is, that he would be the ultimate agent of human destruction).
Motte-and-Bailey: The motte-and-bailey fallacy occurs when someone conflates two positions: a controversial one that's difficult to defend (the "bailey") and a more modest one that's easier to defend (the "motte"), allowing them to retreat to the latter when challenged while still implying support for the former. In this case, they would make the indefensible argument “ASI will inevitably kill everyone! We must pause!” and then retreat to “Well, we’re calling for some common-sense regulation.”
Given their lack of evidence, and these myriad logical fallacies and rhetorical dead-ends, I have decided to fully disengage from the Pause movement. My personal conclusion is that it has no substance, is predicated upon flimsy assumptions, and is a grab for attention in the spirit of “if it bleeds, it leads.”
A Friendly Warning
As I was mired in these debates, a trusted friend texted me, warning me about the wording in one of my tweets. They were concerned about the rhetoric I was using, particularly when I used the term “Doomer.” After some discussion, it was clarified that my friend and I have had fundamentally different experiences in dealing with the various epistemic tribes on the internet.
In my case, I was being brigaded by the safety side of the AI debate. In my friend’s case, they’d been brigaded by the accelerationist side of the debate. We quickly realized that “political horseshoe theory” applies to AI as well.
To provide context, my friend had been mobbed as a “Doomer” by the accelerationist community despite their personal assessment of AI X-risk being relatively low (all things considered) and relatively close to mine. They explained that, particularly to the E/ACC community, any appraisal of AI risk makes you a Doomer. The irony here is that, despite the fact that my friend and I had similar risk assessments, members of opposite tribes had attacked us. Let me emphasize this: reasonable and clear-headed assessment of risk is tolerable to neither E/ACC nor Pause AI.

Consider this: a broad survey of AI experts revealed a median X-risk of 5% to 10%, yet the most aggressive Pause AI advocates argue that the “vast majority” of experts are worried about X-risk. Logical fallacy after logical fallacy, combined with internet mobbing, convinced me that this is not a healthy epistemic tribe. As Will Storr describes in his book The Status Game, this is a hallmark of a “narrowing status game,” where purity testing, virtue signaling, and in-group vs. out-group invective intensify.
My friend told me that, despite having a reasonable p(doom), they were called a Doomer because they had any assessment of risk at all. That revelation was a bit shocking to me, as it seemed like an overly broad definition. Conversely, there are X-risk people out there with a p(doom) of 100%―they have foreclosed on any possible future for humanity. More on this later in the article. It is these more extreme cases for which I personally reserve the term “Doomer.”
Because of the mobbing I received, as well as the backlash for refusing to debate the Pause AI movement, I shared an account of events with trusted friends, peers, and advisors. No one I consulted recommended engaging with the Pause movement (except one who encouraged me to do it for amusement value), and in point of fact, one insider called them “dangerously naive.” At that point, I realized that this was a far more interesting psychosocial phenomenon, going way beyond AI. This was a case study in status games and rhetoric!
I participated in good faith, I did my homework, and I explicitly told them that I wanted to strengthen their argument―and I was attacked for it.
Toward Semantic Clarity
I explained all this context to Claude and asked it to help derive a spectrum of definitions of Doomer. In other words, “What do you mean when you say Doomer?”
My goal here is neither to validate nor vindicate any particular epistemic tribe, with the exception of those with the most extreme views. I don’t believe it’s possible to dislodge the use of a pejorative in public discourse through artificial means. Rather, these things tend to work themselves out over time as the conversation evolves. My aim here is to advance the conversation by increasing semantic clarity around definitions and call attention to patterns of thought and behavior that are becoming problematic. Gleefully inciting brigading against people with reasonable positions is categorically unhealthy and unproductive behavior.
Now, we must clarify what we mean when we say “Doomer.”
This semantic phenomenon, as far as I know, was first documented in ancient Chinese Confucianism as the “rectification of names.” Within this tradition, there is a belief that social progress and harmony are predicated upon calling things by their accurate and proper names.
So, in the spirit of ancient Chinese philosophical wisdom, let us rectify the name of Doomer.
When you hear the term “Doomer” what do you think?
My audience has converged on category 4 as the most common meaning of Doomer: someone who is convinced that catastrophe is almost certain. The chief difference between category 4 and category 5 is the presence of a doomsday cult or cult-like behaviors. Roko’s Basilisk comes to mind. In other words, people in category 4 may believe that existential risk to humanity is extremely high without exhibiting cult-like tendencies or joining an epistemic tribe that has given up on humanity.
In Conclusion
By engaging with the conversation, I stumbled into a sort of hornet’s nest and realized just how fringe some of these belief structures have become. I have been involved in internet debates for more than 20 years now, so I’ve seen all these rhetorical tactics before. Back when we didn’t have Twitter, only BBCode forums, we ended up in a sort of intellectual codependency, as forum admins were only there to moderate behavior, not litigate facts or salience. The internet, after all, is somewhat anarchic, and the platform owners are incentivized to keep engagement up for advertising revenue.
I remember, in stark detail, going down all the various rabbit holes over gay rights (remember when that was a thing?). I remember arguing against what people now call “gender essentialism,” though we didn’t have that term yet. We went down every semantic debate: what do you mean by “gay”? To which the answer was “Here’s the Kinsey Scale!”
What I learned, all those years ago, is that no one on the internet ever admits they were wrong. They just quit the battlefield. Once the armchair debating is over, the denizens of the internet grumble and excuse themselves, generally with some retort like “I’m not dealing with this anymore,” and shuffle on. While the AI debate is far from this point, I recognize many rhyming patterns. As more scrutiny and pressure build against the Doomers, I am hoping for the same cycle of discussion. We are presently seeing a similar pattern with the “trans debate” as well as DEI. The tide has turned against anti-trans advocates as well as against DEI, though most people are not versed enough in rhetoric to see it. Time will tell.
The AI safety community was a very small, insular group for many years. Given this fact, it had ample time to establish its own status games and social norms around what counts as “facts” and “experts” and “evidence.” For some of them, a good argument on LessWrong is more valid than contemporary science. I have frequently accused this epistemic tribe of engaging in hand-waving and prophecy. They care more about their emotional gut check and the vibes of their community than evidence.
This is one of the chief conventions of this epistemic tribe: they treat the word of people like Yudkowsky and Yampolskiy as gospel, and LessWrong as scripture. From my irreligious perspective, this tribe is oddly zealous, fanatical, and dare I say, dogmatic. Ironic, considering they label themselves “rationalists.” They have elevated the most irrational commentators in the space to sainted status. The more the fringes double down on their prophecy of doom, the more fanatical their acolytes become.
However, now that AI has broken into the mainstream, many of the more staunch AI risk mavens are simply unfamiliar with and uncomfortable with rhetoric, debate, and public discourse. Once they leave their echo chambers, they are somewhat shocked to realize that most of the world does not share their views, and their hyperbolic claims serve only to alienate them further, sending them scurrying back into their warrens. As they have deeply internalized Pause and Doom as part of their identity, this provokes a groupthink “immune response.”
I am not the only one speaking out against this nonsense.
I say this as someone who started in Doomer category 3 or 4 until I looked at the science and conducted my own experiments. I am now solidly a category 1 Doomer: “There is some risk from AI, but I see no evidence of the more hyperbolic predictions or extreme hypotheses put forward by the hardline AI safety community.”
I revised my X-risk, or p(doom), downward by virtue of engaging with the most hyperbolic AI safety crowd in good faith. They presented their best arguments, and I used these to create a risk assessment profile built on their own rationalizations. When I plugged in numbers from my crowd and arrived at a risk of 12.7%, I was called deranged by members of the AI safety community. My personal, unfiltered p(doom) is now less than 2%; see below for rationale and data.
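For readers who want a feel for how this kind of risk profile works, here is a minimal sketch of a multiplicative conditional-probability chain. The step names and probabilities below are placeholders made up purely for illustration, not the actual figures or framing from my assessment; the point is simply that a chain of “maybes” multiplies down quickly, which is why modest changes in assumptions swing p(doom) from double digits to a percent or two.

```python
# Minimal sketch of a multiplicative "risk chain" assessment.
# Every step name and probability below is a hypothetical placeholder,
# NOT the actual figures or framing used in my assessment.

def chained_risk(steps: dict[str, float]) -> float:
    """Multiply conditional probabilities; each step assumes the prior steps occurred."""
    risk = 1.0
    for name, p in steps.items():
        risk *= p
        print(f"  P({name}) = {p:.2f} -> cumulative risk = {risk:.4f}")
    return risk

# Placeholder "pessimistic" assumptions (illustrative only)
pessimistic = {
    "AGI is developed this century": 0.90,
    "it is badly misaligned": 0.60,
    "misalignment goes undetected and uncorrected": 0.50,
    "failure cascades to an existential outcome": 0.45,
}

# Placeholder "skeptical" assumptions (illustrative only)
skeptical = {
    "AGI is developed this century": 0.80,
    "it is badly misaligned": 0.30,
    "misalignment goes undetected and uncorrected": 0.20,
    "failure cascades to an existential outcome": 0.30,
}

print("Pessimistic chain:")
print(f"p(doom) ~= {chained_risk(pessimistic):.1%}\n")

print("Skeptical chain:")
print(f"p(doom) ~= {chained_risk(skeptical):.1%}")
```

Under the pessimistic placeholders the chain lands around 12%, and under the skeptical placeholders it lands below 2%; the structure of the argument is identical, only the assumptions differ.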
Finally, I will concede that I have taken a stance against the more hardline Doomer/safety community, as I see numerous fatal flaws with their social norms, rhetorical patterns, and epistemic beliefs. Furthermore, the longer I reflect on their espoused beliefs and current behaviors, the more dangerous I believe they are. When someone believes that humanity is inexorably headed for calamity, they may be willing to commit increasingly extreme acts. The slow but steady escalation of their rhetoric should be cause for alarm.
“Anyone who can make you believe absurdities can make you commit atrocities” ~ Voltaire
Forget about academic positions on status. Figure out what you actually stand for and why. Most of the concern around AI, that I am hearing, is related to the inevitable arrival of AGI robots. Not so much how soon, how powerful, how much more intelligent - but just the fact of it, and what it’s going to mean for the country, world and universe.
If you choose to fixate on the fringes of the argument, you are missing the majority middle ground. I realize it’s less sexy, less charged, less divisive. But again, figure out what you actually stand for, actually believe, and want to amplify.
Forget status, likes, subscribers, gossip, pissing contests, big declarations, etc.
Unless that is your thing, of course.
You consider a risk of 5-10% of TOTAL EXTINCTION to be unworrying? I guess that’s personal preference, but I consider it rather high.