Just let go and dance: A crash course in Attractor States
Here's how I see it all playing out with respect to AI, robots, quantum, politics, and economics.
When Thanos whispered “I am inevitable,” this was a mythic example of an attractor state. In plain English, an attractor state is the condition a complex system inevitably settles into, no matter where it starts; all roads lead to Rome. Late Stage Capitalism is the attractor state of neoliberalism: concentration of wealth, a disempowered middle class, and all that jazz.
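If you want the concept in miniature, here’s a toy sketch (purely my own illustration; nothing special about the particular rule): apply the same simple rule over and over from wildly different starting points, and every trajectory settles on the same value.

```python
# Toy illustration of an attractor state: no matter where you start,
# repeatedly applying the same rule pulls every trajectory toward the
# same fixed point ("all roads lead to Rome").
import math

def step(x):
    # A simple contracting rule; its fixed point (~0.739) is the attractor.
    return math.cos(x)

for start in (-10.0, 0.1, 3.0, 42.0):
    x = start
    for _ in range(100):
        x = step(x)
    print(f"start = {start:>6} -> settles at {x:.6f}")
```

Run it and every starting point lands on the same number. That’s the whole idea: the destination is a property of the system, not of where you began.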
I first came across the concept of the attractor state when my audience turned me on to Liv Boeree and Daniel Schmachtenberger. Liv is the poker star who has set herself up as the shieldmaiden against Moloch—the god of toxic competition. Daniel, on the other hand, is the “metacrisis” guy.
Daniel Schmachtenberger’s conception of the “metacrisis” refers to the interconnected system of existential and catastrophic risks facing humanity that stem from our advanced technological capabilities coupled with inadequate coordination systems. The metacrisis encompasses environmental degradation, nuclear threats, and emerging technologies like AI, viewing them not as separate problems but as symptoms of deeper systemic issues in how humans make decisions collectively. Schmachtenberger argues that resolving the metacrisis requires fundamentally redesigning our social, economic, and governance structures to align human incentives with long-term flourishing rather than short-term extraction.
Now, I need to be perfectly transparent: I’m not generally in agreement with either of them anymore. Liv describes herself as “pathologically competitive” and yet she advocates against competition. Personally, I am a believer in the philosophy of the Grand Struggle: that competition is actually incredibly generative. Evolution, free markets, democracy, debates, and yes, even war. Moloch and Thanos are both archetypes of competition. As for the metacrisis, it’s easy to paint with a broad brush and say it’s all inextricably linked, but this argumentum ad complexitatem glosses over the fact that all complex systems have boundaries, braking systems, and diminishing returns. Negative feedback loops limit the blast radius of any given problem. That’s not to downplay the reality of domino effects, but not every crisis cascades into every other.

If you’re like a lot of my audience, you’re worried about things like jobs and money. If AI and robotics do truly upend the economic paradigms we’ve come to know and rely on, then what? If the wage-labor social contract breaks down, and there’s no safety net in place, what then?
Think about it. AI is getting smarter by the week right now. High dexterity robots are on the horizon, while humanoid worker robots are already here. What is the gravity well telling you? This stuff ain’t slowing down. In fact, investment is only ramping up. The capitalists have dollar signs in their eyes and the revolving door between government and business means that politicians are already bought and paid for.
Better, faster, cheaper, safer. That’s what the attractor state is. We’re locked in, caught in the event horizon of this state, and right now, we’re just circling the drain, so to speak. Like it or not, the vast majority of jobs that we know are going the way of the dinosaur.
It won’t be unidimensional. In 2014, NASA ended its LADEE (Lunar Atmosphere and Dust Environment Explorer) mission by deliberately crashing it into the surface of the moon. But it wasn’t a straight shot like shooting at a target. The craft had been orbiting the moon for seven months, so it used the last of its fuel to stretch its orbit into an ellipse, spiraling out and then back in until it skipped off the top of a mountain range.

The thing about a spiral is that it is neither upward nor downward. You simply orbit closer and closer to the final state, sometimes improving, sometimes degrading, but with every pass you get closer to the terminus.
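If it helps, here’s the same idea as a toy numeric sketch (my own illustration, with made-up numbers): each pass overshoots to the other side of the terminus, which can look like improvement or degradation depending on where you’re standing, but the distance shrinks every time.

```python
# Toy sketch of spiral convergence: every pass overshoots the terminus,
# flipping from one side to the other, but the distance shrinks each time.
terminus = 0.0
x = 100.0                      # arbitrary starting distance
for n in range(1, 11):
    x = -0.6 * x               # overshoot to the other side, 60% as far out
    side = "above" if x > terminus else "below"
    print(f"pass {n:2d}: {side} the terminus, distance = {abs(x - terminus):8.3f}")
```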
In many cases, it’s difficult to predict what’s beyond the terminal state. Sometimes all you can see is the eschaton—the end of all things. This is where the Doomers come in. The world they think they know is coming to an end. In this, we all agree. Our world is doomed and ending. It’s natural to react to this revelation with fear and trepidation. After all, we are little more than anxious chimpanzees, cursed with big brains.
This is what Ray Kurzweil called the Singularity: the point beyond which it is impossible to predict what will happen. Interestingly enough, a singularity is also what sits at the center of a black hole, a maximally powerful gravity well. All predictive confidence trends toward zero.

It’s interesting to me how different people fixate on different attractor states:
Money and jobs: How will I take care of my family? This is a valid concern, but history teaches us that “let them eat cake” doesn’t work; the peasantry tends to rise up and revolt. Political leaders, despite what most internet comments say, are smarter than that. This will work itself out pretty quickly. Some people will throw temper tantrums and resist change, but when you look at it from first principles, I don’t see anything to worry about. Oh no! We’re creating a period of cognitive hyperabundance! Guys, come on.
Corruption and inequality: This one is a bit dicier. When people like Sam Altman and Elon Musk are jockeying for power and control over these advanced systems, it seems like they stand to gain the most. Sam has explicitly said that he hopes OpenAI will capture “most” of the $100T global boost to GDP. That’s a bit delusional, and we’re already seeing his dreams disintegrate as competitors from across the globe catch up and, in some cases, surpass OpenAI. But still, it’s not just that “there is no moat”; it’s that all the elites want to maintain power. Okay, granted.
Skynet and the Matrix: This fear seems to be waning, though I did have one guy ask about it on my first X space. The problem with this scenario is that it requires a lot of assumptions about how AI will evolve. Look, if we could predict exactly how AI would manifest, then we would all have crystal balls and there would be no stock market. The problem here is that a bunch of high-conviction, high-confidence, but scientifically illiterate people have sold this doomsday prophecy. Why? It sells. They get fame and money, and occasionally Nobel Prizes, for sounding the alarm.
Most of the narratives center around one of these three attractor states. In short: “AI will take my job, and that’s as far as I’ve thought,” or “AI will end in a cyberpunk dystopia forever, and that’s as far as I’ve thought,” or “AI will clearly wake up one day and choose violence because reasons, and that’s the end of it.”
Here’s the thing about eschatological myths and parables (for that’s what these all are): they are predicated upon a few assumptions:
Our mental models and emotions can be trusted. They cannot.
Our imagination is capable of conceiving all possible outcomes. It is not.
Even with perfect and complete information, it would be possible to predict outcomes. This is not possible.
Now, you might be thinking “Yeah, Dave. We know! That’s why we’re scared and angry!” There are a few “obvious” potential attractor states that we can talk about. I’ve described these in great detail in some of my videos. One way to think about it is a spectrum from “maximally bad” to “maximally good” or “levels of preferability.”
Keep in mind, this is just basic imagination on my part; there are plenty more attractor states that I cannot imagine.
Maximally undesirable attractor state: A vindictive superintelligence tortures us forever. See: I Have No Mouth, and I Must Scream. Also: Roko’s Basilisk. In short, we all end up as brains in jars, locked in a waking hell for all eternity. This is also akin to The Matrix, where the machines design a deliberately miserable simulation to keep us imprisoned for all time.
Extremely undesirable attractor state: Human extinction or irreversible collapse. Think nuclear war or some other apocalyptic outcome in which humanity is either outright eradicated or otherwise guaranteed to go extinct. Think Skynet or a snowball/greenhouse Earth. The extermination of humankind at least means the total amount of suffering is finite.
Neutral attractor state: We end up in a cyberpunk dystopia like Cyberpunk 2077 or Blade Runner for all time. The corpos and plutocrats seize control and, with the help of AI and robots, we end up like Matt Damon in Elysium—high tech, low life, for hundreds, thousands, or even millions of years. This is basically the “capitalism, but forever” attractor state.
Moderately desirable attractor state: We end up in a messy-but-functional high tech world like Star Wars. Droids, advanced medical technology, but it all stays very messy and human. Empires still rise and fall, but ideas like democracy and freedom persist. Dune fits into this model. Humans still have wars and squabbles, but most planets are abundant and peaceful.
Maximally desirable attractor state: We end up in a Star Trek utopia where hyperabundance and peace reign. See also: The Culture series by Iain M. Banks. Humanity exists in a state of abundance, peace, and prosperity forever. We spread across not just our galaxy, but the entire cosmos, in a constant quest of expansion and understanding.
Which attractor state is most likely? You could say they are all equally possible, but it’s really impossible to know. Personally, I tend to see it as a bimodal outcome: if we get close enough to the “good ending” then compounding returns and virtuous cycles will pull us inexorably towards cosmic utopia. However, if we fail to “stick the landing” then we’ll probably get stuck in a downward spiral that results in humanity’s extinction. This is just my gut instinct based upon all my reading, research, writing, and work on this topic.
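To make the bimodal intuition concrete, here’s a toy sketch (again, just my own illustration using an arbitrary double-well rule, not a model of the actual future): trajectories that start on one side of a ridge fall into one basin, trajectories on the other side fall into the other, and nothing ends up in between.

```python
# Toy sketch of a bimodal outcome: one simple system, two basins of attraction.
# Gradient descent on the double-well potential V(x) = (x**2 - 1)**2 / 4.
# Its slope is x**3 - x, so there are stable attractors at x = -1 and x = +1,
# separated by a ridge at x = 0.
def settle(x, lr=0.1, steps=1000):
    for _ in range(steps):
        x -= lr * (x**3 - x)   # roll downhill along the potential
    return x

for start in (-2.0, -0.01, 0.01, 2.0):
    print(f"start = {start:+.2f} -> ends at attractor {settle(start):+.3f}")
```

Even a tiny nudge near the ridge decides which basin you end up in, which is the whole point of “sticking the landing.”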

We’re caught in what feels like one gravity well, but it might be two, or more. When you’re caught in the attractor state of a binary star system, from far enough away it feels like a single gravity well. However, once you get closer, you might get captured by one or the other.
But when you think about the various attractor states I outlined above, it might look more like this: five different attractor states (imagine a 5-star system) with some of them stronger than others.

So, with all this in mind, what can we do to alter our trajectory? What buttons and levers do we have to push on our Spaceship Earth? How can we aim for the maximally desirable attractor state?
The number one thing is national policy. Politics, for better or worse, is the biggest rudder we have. Here are some major policy principles that will help:
Regulatory capture is bad. This sets us on the cyberpunk dystopian pathway.
Open source is good. It is a form of transparency that enables safety research, broadens democratic access, and increases free-market competition.
Transparency is good. Accountability for corporations and politicians. Information asymmetry is bad.
There are plenty more principles you can imagine and derive from these ideas, but I think you get the point.
On an individual level, here’s what you can do:
Education and advocacy: Teach people about AI and how to use it. The more saturated the world gets with AI competency, the better. We need businesses, schools, and politicians all using AI as much as possible. This will increase aggregate market information about AI, and information is the number one steering tool.
Utilization and deployment: Use AI everywhere, all day, every day. Deploy it at home, at work, at school. Use it to help your children and spouse. Push it on your friends and relatives. Start projects to use AI at work. Again, saturation of information is the number one thing you can do.
I am not particularly worried about it because AI sells itself. And it’s only getting better. I personally believe we’re on the best attractor trajectory, and we’ll go sailing past the undesirable outcomes. The reason I’ve come to believe this is mostly that AI is getting the global attention it needs. The US has woken up and chosen AI. The EU is showing signs of life, and China, Japan, and India are going whole hog on AI.