The AI Expectation Gap: Why Silicon Valley and Main Street See Two Different Realities
You’ve probably seen the headlines or heard the podcast clips. Influential voices in the AI world, like Andrej Karpathy, have recently made waves with claims that AGI (Artificial General Intelligence) is at least a decade away and that current “agentic” capabilities don’t match the soaring hype of industry expectations.
When figures like Karpathy or Sam Altman speak, the public conversation—the “social stock market” of ideas—reacts wildly. It’s a powerful effect.
But based on what I am seeing—along with my colleagues, peers, and clients—this narrative gets things almost exactly backward.
The problem isn’t that AI is under-delivering on the hype. The problem is that, for most real-world organizations, expectations are still far below what AI is capable of today.
The Two Expectation Gaps
When an AI researcher at the cutting edge says AI “isn’t meeting expectations,” they’re operating from a specific vantage point. Their expectations are stratospheric. They’re thinking, “It hasn’t fully cured cancer yet,” or “It hasn’t solved all of physics.” From this Silicon Valley perspective, where the goal is a god-like AGI, today’s reality naturally falls short.
But for the Fortune 500, government agencies, non-profits, and other large organizations, the situation is inverted.
Their expectations are often shockingly low, anchored by what they saw two years ago, fears of “hallucinations,” or internal memos from a legal department that is still playing catch-up.
Meanwhile, the actual capabilities of today’s models are leaping further ahead of their low expectations every single day. There is a huge, growing gap between what most organizations think AI can do and what it actually can do.
The Asymmetric Reality of Adoption
We are at a phase of deeply skewed, asymmetric adoption within these organizations.
Sure, in any given company, you have a few power users who are subscribed to every tool and are using AI all day, every day. But as I mentioned in the video, most employees who have been given a tool like Microsoft Copilot are still just saying, “Oh yeah, I used it to help write an email.” That’s as far as they’ve gotten.
In that same organization, you have employees who haven’t heard of it, haven’t touched it, or are actively avoiding it.
We’re in the “CrackBerry” era of AI—the point where smartphones were an interesting, addictive toy for a few executives, but not yet the mandatory, strategic, ubiquitous tool they are today. That transition from “nice to have” to “mandatory” took about a decade for mobile. AI won’t take that long.
It’s Not the Tech, It’s the Culture
So why is adoption so slow if the technology is already exceeding low expectations? The friction isn’t technological; it’s human and cultural.
1. Tool Fatigue: People are tired. As I said in the video, I’ve been at companies where people were bent out of shape over migrating from Skype to Teams. They see AI as just “another tool” you’re throwing on the pile. When you consider that the average large company supports over 1,000 internal applications, you can understand the fatigue.
2. The Missing “Blessing”: Even when employees have access to powerful tools, they are often afraid to use them. They’re waiting for the formal “go-ahead.” They need a town hall where Legal, HR, and InfoSec “make the sign of the cross,” anoint the tool, and give it their blessing. Until AI is formally consecrated as “safe” and “approved,” most employees will keep their distance, fearing they’ll be the one to do something boneheaded and get in trouble.
Why “AGI” is a Useless Term for Business
This brings me to the crux of the argument and the term “AGI” itself. For any practical business, military, or government application, “AGI” is an utterly meaningless term.
It’s an imaginary label, a “figment of the imagination” that is different in everyone’s head. It’s a mental archetype, a Rorschach test for our hopes and fears about a “maximally potent machine.” As I said, people react to the idea of AGI the same way they react to the idea of a literal deity—some see a savior, others see a vindictive god.
This “science fiction” label has no business value. It’s a dangerous distraction that causes leaders to miss the forest for the trees.
Focus on Capability, Not Labels
The leaders and organizations I work with don’t care about the label. They shouldn’t. The only questions that matter are:
What can this technology actually do today?
What is its objective rate of improvement?
The bottom line is whether a tool is better, faster, cheaper, or safer than a human at a given task. It’s already achieving that on many fronts, and the academic studies that “prove” it can’t do something are often using models that are two years old—an eternity in this field.
So when you hear claims that AGI is a decade away, my response is simple: it doesn’t matter.
Look at the capabilities in front of you. Look at the rate of change. The gap between what’s possible today and what your organization is actually doing is massive. That’s the gap worth focusing on.