What does a world of "Cognitive Hyper Abundance" look like?
Let's explore the first, second, and third order impacts of "solving intelligence" once and for all.
It’s looking increasingly like AI companies, such as OpenAI, have solved intelligence for good. What I mean by that is “the AI can generalize beyond its training distribution.” In plain terms, it can think outside its own box and solve problems the way a human can: by reasoning from first principles and thinking beyond the horizon of its training data.
In other words, Sam Altman’s promise of “intelligence too cheap to meter” is coming very soon. My current working definition of ASI (artificial super intelligence) is this:
We will have achieved ASI when human intelligence is no longer a constraint on any scientific or economic activity.
Right now, there are only about 8 million PhDs in the world. That’s one doctorate for every 1,000 humans, and those numbers are just not good enough. But with the ongoing refinements and advancements in general-purpose AI, we’re about to flip that ratio. That means you’ll have the equivalent of 1,000 PhDs’ worth of intelligence in your pocket at all times. And that goes for everyone.
First Order Impacts
The immediate impact is that all knowledge workers will basically be equally unstoppable, assuming they know how to use these tools. It is not a guarantee that these new tools will be equally adopted, and we’re still seeing “skill issues” as a primary constraint. Silly humans with silly beliefs about AI! Anyone who doesn’t update their beliefs about what AI can do every six months (or less) is working with outdated information. I had to block someone on X—a PhD in computer science no less—who was droning on about how AI still hallucinates and cannot context-switch or detect its own errors the way a human can. Not even worth trying to correct a bozo like that, because this train is stopping for no one.
But, assuming we solve the “skill issue” of adoption, it means that:
Software developers will all have expert coders riding shotgun and no software problem is unsolvable or constrained by lack of experience. Heck, you won’t even need to know the language! Just express what you need to the AI and check for integration and that the code runs as expected. Unit tests are your friend!
Medical doctors will have expert diagnosticians to consult with, ready to go at all times. Right now, the biggest constraint here is ego. Human doctors default back to comfortable (incorrect) diagnoses even when the AI explains the correct diagnosis to them!
STEM researchers of all stripes have the best calculator and thinking partner in their pockets at all times. One of my good friends is an oceanographer who does a lot of CFD (computational fluid dynamics) and his work setup now is ChatGPT o1 and MATLAB. I’m not even joking.
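The developer workflow above—describe what you need, then verify the result—can be sketched as a minimal “trust but verify” unit test. This is an illustrative example, not a real tool: `slugify` stands in for any function an AI assistant might hand you.

```python
# Sketch of the "trust but verify" workflow: treat AI-generated code as a
# black box and pin down its expected behavior with your own assertions.

def slugify(title: str) -> str:
    """Hypothetical AI-generated helper: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify():
    # You don't need to read the implementation -- just assert the contract.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("") == ""

test_slugify()
print("all checks passed")
```

The point isn’t this particular function; it’s that the tests encode *your* intent, so you can accept or reject the AI’s code without ever reading it line by line.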
Now, you might be thinking “okay great, but unless my doctor starts using these ultra-powerful AIs, it’s not really going to help me, is it?” And that’s true. Even if my buddy gets an IQ 2000 helper, no amount of CFD is going to make your life any better.
So this leads to the next question I get, which is “Dave, when will this actually change things for me, personally?”
Second Order Impacts
Most people won’t “feel the AGI” for a little while. That is, unless you’re using it to fix your own life. I use AI all day every day, on projects ranging from “make more money” to “write novels” and “fix my health.” With that being said, the general purpose Claude Projects UX is not optimized for any of these activities, but it’s flexible enough for all of them.
But what are the downstream impacts you’re most likely to experience soon?
Job opportunities will start to manifest. We need AI-enabled humans more than ever, and this is one of the biggest constraints to companies adopting AI. It’s only a tiny trickle right now, but eventually we should expect to see “can use AI like an expert” as a top job requirement.
At the same time, we’re going to see an avalanche of job losses coming. Adapt or die. Mark Zuckerberg has already suggested that AI will be able to do the work of Meta’s mid-level engineers this year. Upskilling won’t be enough for some; in many cases you’ll need a full pivot.
Cool new apps will also start to materialize. Some of these are direct AI apps, like SUNO, which lets you make music. Others are AI-augmented apps, like how Zoom already has AI summarizers (with more coming). AI-enabled gaming is also on the way, with more dynamic stories and characters.
A lot of the improvements will happen in the background. Your tech will get better and more interesting, but that’s more or less what you’ve come to expect over time. The “wow” moment will probably hit most people the first time they talk to a humanoid robot that is clearly functional and more intelligent than your neighbors. That will be the “turning point” for most people, as most humans think in concrete terms—unless you can see it and touch it, it doesn’t exist.
Third Order Impacts
It’s pretty difficult to predict exactly what will happen as superintelligence becomes more embedded in society. But this is where I talk about things like the economic agency paradox and post-labor economics. Technology, historically, has been hugely deflationary, meaning that it lowers the cost of goods and services, which then frees up resources to create entirely new market segments.
That’s fancy economic jargon for “a century ago, we didn’t have internet, game consoles, and smartphones, along with all the goods and services that come with these new modalities.” When the world is saturated by ultra-intelligent AI and robots, entirely new paradigms open up. If you owned a squad of robots, what could you do with them? Run a small farm? An auto shop? Maybe you’ll buy and restore old Mustangs and Corvettes? These predictions will probably be seen as laughably quaint in just a few years. I mean, just think about all the Popular Mechanics magazines that predicted we’d be flying around with jetpacks and moving around through tubes. The Jetsons’ ideas didn’t come out of a vacuum, but we are pretty close to Rosie the Robot.
What Won’t Change?
Thermodynamics.
I know that’s a snarky answer, but what I mean by this is “energy and time.” Steel will remain heavy, property in Malibu will remain scarce, and huge experiments and labs will still take a lot of time and money to build. Everything from the JWST to the LHC would still take many years and billions of dollars, because “genius-level human labor” is not the primary constraint.
Planes and trains won’t go any faster than they do today, and your house will still be somewhat expensive, simply due to its sheer mass. To understand this side of the economic puzzle, I tend to think in first principles.
How much does it weigh?
How far does it need to travel?
How much energy is required?
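Those three questions multiply together into a back-of-envelope energy cost. Here’s a minimal sketch; the energy-intensity figures are rough order-of-magnitude assumptions for illustration, not real freight data.

```python
# First-principles transport cost: mass x distance x energy intensity.
# The intensity constants below are assumed illustrative values.

def transport_energy_mj(mass_tonnes: float, distance_km: float,
                        mj_per_tonne_km: float) -> float:
    """Energy needed to move a load, in megajoules."""
    return mass_tonnes * distance_km * mj_per_tonne_km

SHIP = 0.2   # assumed MJ per tonne-km, ocean freight
TRUCK = 2.0  # assumed MJ per tonne-km, road freight

# Shipping 10 tonnes of pears 10,000 km by sea vs. trucking them 500 km
# from a local farm -- the physics doesn't care how smart the planner is.
print(transport_energy_mj(10, 10_000, SHIP))   # 20000.0 MJ by sea
print(transport_energy_mj(10, 500, TRUCK))     # 10000.0 MJ by truck
```

This is why “intelligence too cheap to meter” doesn’t make steel light: smarter routing can shrink the distance term, but the mass and energy terms are set by thermodynamics.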
Certainly, in the long run, a hyper abundance of cognitive labor (followed by physical robots) will dramatically reduce the cost of many things. For instance, we might create much, much shorter supply lines, thus reducing the thermodynamic cost of mass and distance. Local manufacturing will make much more sense when all labor is abundant. We’ll totally destroy the need for labor arbitrage. You know how your pears are grown in South America, then shipped to China to be packed, and then shipped back to America? That’s “labor arbitrage” in action. It makes no sense in a future where robots can do the whole thing in one go.
Other things that won’t change too fast: human stupidity and skepticism. Honestly, the Luddites will probably be the biggest bottleneck to change before too long. We’re already seeing this: despite studies showing ChatGPT reaching roughly 90% accuracy on medical diagnoses (compared to about 70% for human doctors), people are dragging their feet and resisting the change.
This is where tech-first folks get frustrated.
Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
Here’s an array of human-centric bottlenecks:
Good old-fashioned stupidity. The AI is already smarter than most people (just look at some of the comments on the internet). As they say, “Don’t attribute to malice that which is adequately explained by stupidity. People are far dumber than they are mean.” On the internet, I’ve learned that stupidity, misinformation, trolling, and dysregulation are often indistinguishable from one another.
Well-intentioned but misguided skepticism. Some people trust their kneejerk reaction of skepticism to new ideas as though it’s fact. It’s basic chimpanzee-level psychology. “New thing spooky, therefore new thing probably bad.”
Straight up avarice. Yep, human greed will be a major roadblock to progress. Those who benefit from the current status quo will fight to keep it. The powers that be have a somewhat schizophrenic relationship with creative destruction—they love it when it takes other jobs, but not theirs.
Institutional inertia. The much-maligned bureaucracy will gum up the works. I was just buying a new car and even though I told the salesman point blank “I’m not buying extended warranty” he insisted on going through his spiel. Humans are not that bright, not that flexible, and don’t think in systems.
Long story short: while a lot will change, and sometimes in big ways, the more things change, the more they stay the same. Sometimes it’s only by contrast with what stays fixed that we’ll see where the real movement is.