The Messy Middle
The vibe has changed, but it's more than that
Starting late last year, I stopped liking the vibe in the AI space. I mentioned this in several videos and tweets. Things are just… weird.
I now have an answer to what’s going on, and the Anthropic v. Pentagon saga has crystallized it.
We are now in the “Messy Middle” of the AI story.
Let’s break it down based on storytelling archetypes. Up until now, we were solidly in Act One of the AI story. The world was still mostly familiar. AI threats were mostly hypothetical, like the Hobbits drinking at the Green Dragon talking about dwarves and war beyond their border. That was before Gandalf came back and said shit just got real.
Those were the good old days when p(doom) and x-risk were still hypothetical arguments, where CBRN discussions were more like bogeyman stories, and the nature and flow of power was still abstract.
In storytelling, there is a plot beat called the “Break into Two” moment. This is when the character is thrust (often forcibly) into the new world, where everything changes. For Frodo, this was when Gandalf came back and said “You must take the Ring.” This is the end of innocence, the moment that things become real to the protagonist.
Up until now, AI job loss, video generators, and “slop” were mostly just background noise, the grumbling before the storm. But now it’s all becoming very real, very quickly.
Act Two of the AI story is characterized by complications. We need to differentiate this from complexity. Complexity denotes many intersecting systems, which has always been true. AI has always intersected education, economics, military, and culture. That hasn’t changed. What has changed is that the stickiness is rising, which is making it complicated. In this context, “complicated” means “many mutually exclusive and antagonistic relationships, stakeholders with diametrically opposed beliefs, and everyone fighting over control of the ball.”
For the EA (Effective Altruist) AI safety crowd, this has always been the case. They have viewed the entire game as existential the entire time, hence the energy they’ve put into controlling narratives. However, the last 12 months have been devastating to them. First, the Trump admin rescinded all the Biden-era protections, deregulated, accelerated, and now has labeled Anthropic a literal national security threat. If you’re an EA or Rationalist, you basically cannot lose harder. Trump and Hegseth explicitly called out Anthropic for being “woke” and “left” and even called out their “sanctimonious EA rhetoric” directly.
That, to me, is the clearest signal that we have entered into a new regime. This new paradigm has “put the fish on the table” (a business term for addressing the stinky thing that no one wants to talk about). The stinky thing has been the narratives, the unspoken tension, and the beliefs that have been cultivated for years, sometimes in a vacuum, as they collide with reality.
There are many complicated realities now clashing.
Legal and contractual realities vs vibes and ethics
Military and national interests vs civil liberties and private interests
Cybersecurity and geopolitics
Misinformation and disinformation campaigns
Jobs and copyright vs innovation and progress
Obviously, real life is not an idealized movie script. There are rarely clean “plot beats” IRL, but in this case, there was a definitive “Before” and “After” the Anthropic/Pentagon beef.
What should we expect during the Messy Middle?
Unpredictable and emergent changes. One example is that some of the EA crowd has apparently done an about-face on government. Their original theory was “steer from within” by seizing control over the narrative, convincing the government to pause AI, shut it all down, or at least have an “inside man.” But with Anthropic being unceremoniously ejected from the establishment, literally overnight, some of the diehard safetyists are now saying “the government was never going to be on our side.” Not every group or stakeholder will have that level of polarity, but this is the nature of high uncertainty. Ideas can flip-flop. Someone might be pro-AI until it takes their job. Someone might be anti-AI until it saves their life.
Narrative polarization intensifies. If you thought Accelerationists vs Doomers was polarizing, you ain’t seen nothin’ yet. My audience has long been worried about jobs, and in my research, I’ve found that people will bear a great deal of hardship. What they generally do not tolerate is going hungry. Especially not when there’s an obvious elite cadre who are responsible and still doing just fine. Every sector has already had some debates: what role does AI play in education? What about dating? What about creativity and art? But it’s a different matter entirely when it becomes obvious that everyone’s livelihoods are genuinely on the line. It’s going to feel like the Black Death: you never know who’s going to get struck by the AI layoff next. And that makes people angry.
Tribal battle lines harden. It was odd to me, at first, to see AI become so political so quickly. I was labeled “right wing” (and worse) simply for being pro-technology and pro-AI. I’ve been labeled MAGA for pointing out that Anthropic played with fire and got burned. But it basically comes down to “Red team is pro AI, therefore anyone else who’s pro AI gets lumped in with them.” There are different terms for this, but one that sticks out in my mind is “identity stacking.” For instance, if you drink soy lattes, people assume a great deal about you—that you’re a Democrat, that you support DEI, that you’re pro-Palestine, and so on. Conversely, if you espouse any “trad” values, like a preference for monogamy and family, people assume you’re right-wing, a boot-licker, and so on. People look for totems and talismans to make instant, snap judgments. I call these “epistemic tribes.” And those tribes are solidifying and mutating at the same time.
And none of that has to do with technical capabilities. We’re barely out of the chatbot era, and video generators still mess up fingers. Just imagine how much worse it’s going to get as AI surpasses humans across all cognitive dimensions, as robots invade job sites, and all digital goods can be rendered on the fly, in real-time, with no human input.


Illuminating article as always!
I’m someone who canceled ChatGPT Pro this week. My gut reaction was “F them! They ain’t getting my data!” I was team Anthropic all day.
But now, I’m rethinking it. The lines are blurred.
Also, I think it’s funny that the “red” side wants more AI. I feel like their jobs and livelihoods are most at risk. I work in tech and am constantly at the cutting edge, so I’ll be ok…hopefully lol
Your articles, knowledge, and writing bend my brain and open up my mind to ideas I never even knew about. Thank you for that. Some of what you say I don’t fully understand, but it’s a learning journey of expanding my mind ✌🏻