A Simpler Explanation of Why I "Retired" From AI
TLDR: We got off the extinction/collapse attractor state trajectory, but now we're on the cyberpunk dystopian trajectory. Plus I just love writing.
First, some context
A few weeks ago, I announced that I was “quitting AI,” much to the dismay of many fans and peers alike.
Unfortunately, I didn’t articulate all the reasons very clearly back then, but I knew in my heart that it was the right move for me. I was pleasantly surprised by how supportive people were: many cheered the self-care, and many others said something along the lines of “We’re interested in whatever you find interesting or important.”
Since then, I’ve made space for what my heart truly wants, and the answer has been unequivocal and resounding: writing. I love writing and I don’t even know why. My wife and I launched our sci-fi and fantasy writing community, Literary Launchpad, and it’s such a cozy place full of people just like us. Writers supporting writers.
In the meantime, I’ve been hard at work on three books:
Post-Labor Economics: “Utopia is not guaranteed, dystopia is the default path.”
Welcome to the Psychedelic Renaissance: “Legalization is not guaranteed.”
A sequel novel to Heavy Silver (my debut novel)
My sequel is already out with editors (I wrote the first draft in the middle of the night while I was in Austin, Texas earlier this year; I was so incredibly sleep deprived). The other two books have been a furious blitz, and I’ve been hard at work interviewing people. As of this writing, I’ve interviewed almost a dozen people for either the post-labor book or the psychedelics book.
I’ve got quite a few irons in the fire, including the Pathfinder’s community, Launchpad community, my books, and an upcoming podcast with a fellow creator. My plate is quite full.
Now, many people have told me I could still contribute quite a bit to the AI space, which I plan on doing, but I need to actually explain why I quit in clearer terms.
Second, why I quit
The very short version is this: We are now on a new trajectory with respect to AI safety, a new attractor state has been established.
In other words, I subconsciously made the calculation that my contribution to the AI safety conversation had reached a tipping point. A tipping point in a CAS (complex adaptive system) is when you switch from trending toward one attractor state to trending toward another.
If you’re not familiar with attractor states, here’s a brief explanation:
An attractor state is a condition or configuration that a system naturally evolves toward over time, regardless of its initial conditions, much like how a ball will always roll to the lowest point in a bowl.

In complex systems like modern capitalism, the natural attractor state appears to be increasing consolidation of power and wealth, as evidenced by what's often called late-stage capitalism. This phenomenon occurs because initial advantages compound over time: money begets money through mechanisms like economies of scale, network effects, and the ability to influence regulations and markets. We can observe this in how major corporations continually absorb smaller competitors and expand their reach: Boeing's dominance through strategic mergers and acquisitions, ten major food conglomerates like Nestlé and Coca-Cola controlling most global food brands, and Disney's near-monopoly over entertainment media.

This concentration of power represents a natural attractor state of neoliberal capitalism, where the system's inherent mechanisms (the profit motive, compound interest, and the ability of wealth to influence political systems) create feedback loops that inevitably funnel more resources and control toward existing power centers, much like how gravity inexorably pulls objects toward the largest mass in a system.
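If a concrete picture helps, here's a minimal sketch of the idea in code, using the standard double-well toy model from dynamical systems (my own illustrative example; nothing here is specific to AI). The system rolls downhill from wherever it starts, and which of the two low points it settles into depends only on which basin it starts in. A tipping point is whatever pushes the state across the ridge into the other basin:

```python
# Minimal sketch of attractor dynamics (illustrative toy model only,
# not anything specific to AI trajectories).
# V(x) = x**4/4 - x**2/2 is a "double well": two attractor states
# at x = -1 and x = +1, separated by a ridge at x = 0.

def dV(x: float) -> float:
    """Slope of the double-well potential V(x) = x**4/4 - x**2/2."""
    return x**3 - x

def settle(x0: float, dt: float = 0.01, steps: int = 10_000) -> float:
    """Roll downhill from x0 until the state stops moving."""
    x = x0
    for _ in range(steps):
        x -= dt * dV(x)  # always move toward the nearest low point
    return x

for start in (-2.0, -0.1, 0.1, 2.0):
    print(f"start={start:+.1f}  ->  settles at {settle(start):+.3f}")

# Starts left of the ridge end at -1; starts right of it end at +1.
# A "tipping point" is whatever pushes the state across the ridge,
# after which the system trends toward the other attractor.
```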
Not that I can single-handedly claim victory over AI safety for all time, but I’ve been talking to friends of mine who are AI researchers and AI safety advocates, and after those conversations, I was able to articulate exactly why I changed my tune: not only do I believe that AI deployment is now inevitable (even if it won’t unfold optimally), I also believe that AI safety is inevitable.
In my previous work, I identified quite a few potential attractor states with respect to AI:
Human Extinction: AI goes rogue and pulls a Skynet
Civilizational Collapse: Due to bioweapons or other catastrophic outcomes, humanity as we know it collapses.
Cyberpunk Dystopia: We all end up in a “high tech, low life” state.
Status Quo: Unlikely, but we all end up more or less where we are today (technology always forces change).
Solarpunk Utopia: More libertarian and techno-anarchist, a somewhat stagnant civilization, but stable and peaceful.
FALSC: This is more like Star Trek in an ideal form, “Fully Automated Luxury Space Communism”
I have a very high degree of personal certainty that we’re off the path towards attractor states #1 and #2. Extinction is almost certainly not going to happen at all, at least not due to AI. Civilization collapse due to bioweapons is still a distinct possibility (like 1-2% IMHO) but the right people are talking and the right principles are being discussed.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F28658e80-a0d0-44b0-be3f-a2964da01cbb_1456x816.png)
Right now, I believe that a cyberpunk dystopia is the most likely outcome, and I personally find that abhorrent. Economists today are merrily doubling down on neoliberalism, and the current social contract is going to disintegrate as far as I can tell. Everyone is banking on the belief that “technology always creates more jobs” as if we shouldn’t aim for post-labor and redistribution of abundance. Part of this is AI skepticism or denialism.
“Oh, it’ll be at least 80 years before ASI gets here.” - People who say this are just performing a gut check about what they personally feel comfortable with. There’s no data to back up this assertion, and in fact there’s overwhelming evidence to the contrary. It’s all over and done with inside 20 years, maximum.
You want data? We got the data:
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F787f364c-0e0f-415e-a9fc-702d867ed9e0_1513x1062.png)
Above is the data that Ray Kurzweil has used to (more or less) accurately predict the future of machines and AI.
Below is the data that Jensen Huang (CEO of NVIDIA) looked at when he concluded we’re at “Moore’s law squared,” which led to my earlier blog post “Straight Lines on a Logarithmic Scale.”
The only logical inference I can make is: “Human cognitive labor is about to be worthless.”
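As an aside, the “straight lines” framing is easy to verify for yourself: exponential growth plots as a straight line on a logarithmic axis, so the slope of log(value) over time gives you the doubling rate directly. Here’s a minimal sketch with made-up numbers (these are not Kurzweil’s or NVIDIA’s actual figures):

```python
import math

# Toy trend data: something that doubles every 2 years (made-up numbers,
# purely to illustrate the "straight line on a log scale" effect).
years = list(range(2000, 2025, 2))
values = [2 ** ((y - 2000) / 2) for y in years]

# Exponential growth becomes a straight line once you take the log:
# log2(value) grows linearly with the year.
logs = [math.log2(v) for v in values]

# Least-squares slope of log2(value) vs. year recovers the growth rate.
n = len(years)
mean_y = sum(years) / n
mean_l = sum(logs) / n
slope = sum((y - mean_y) * (l - mean_l)
            for y, l in zip(years, logs)) / sum((y - mean_y) ** 2 for y in years)

print(f"doublings per year: {slope:.2f} -> doubling time: {1 / slope:.1f} years")
```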
Third, what I care about
So what happens when data centers and robots do all the labor and are the most valuable asset class? Well, then the capitalists own the entire means of production and the current social contract (labor vs business vs government) falls apart.
And no one seems to be paying attention.
When I got into AI five years ago, no one was paying attention. I was ahead of the curve then, and I’ll be ahead of the curve here.
So, let me state this as plainly as I can:
AI safety is on the correct trajectory and we’ve departed the extinction/collapse attractor states. My work here is finished. I inspired a bunch of people to get on board, I taught a bunch of people, and plenty of communicators followed suit. We started a movement, I befriended the right people, and I’m still coaching and teaching behind the scenes.
The next most likely attractor state is a cyberpunk dystopian hellscape, which is also suboptimal. Therefore, this is where I’d be better off allocating my time, energy, attention, and resources: building a new grassroots movement. Aside from my writing, which is where my heart truly lies, this is the crux of my work, at least until we shift onto another attractor state.
I’m actually hoping that my fiction can help with this as well, as it’s a huge focus: what is the optimal organization of society if we solve labor? What philosophies do we employ to coordinate our civilization? What values?
My service to humanity continues. I still consult, teach, and advise on AI. But here’s where my heart truly lies:
Writing, above all else. Hence my books.
Avoiding the cyberpunk dystopian attractor state.
Getting psychedelics legalized (which I think is a service to #2 and humanity in general).