David Shapiro’s Substack
David Shapiro
Why AI adoption is slower than we would like


Front line conversations and industry research

The Great AI Paradox: Why Is Generative AI Adoption Both Too Fast and Too Slow?

Across industries and nations, a strange paradox is defining 2025.

Business leaders—from the C-suite to the board—all seem to agree: generative AI is non-negotiable. They believe it is a key determinant for their company’s 10-year viability. They know that being asleep at the wheel means they are going to lose the farm over the next decade.

And yet, in the same breath, a vast majority—more than 80%—of these same leaders report seeing no significant bottom-line impact from their current AI investments.

This is the generative AI paradox. We see layoffs in HR and other sectors, partly driven by economic fears, but also by strategic plans to automate. We hear of massive leaked goals from companies like Amazon to slash workforce needs with AI and robotics.

The belief is there. The technology is here. So why is this transformation taking so much longer than everyone would like?

For anyone in tech, this pattern is familiar. We saw it with the personal computer, the internet, mobile, and the cloud. New technologies always have false starts before they suddenly become the default. Right now, we are in that “false start” phase, asking: How do we actually use this, and how do we prove the ROI?

Based on discussions with business leaders and deep research, the problem boils down to a few key bottlenecks. Here are the eight points every decision-maker needs to understand.


1. The Paradox: We Believe in AI, But We See No Impact

The core of the problem is this massive cognitive dissonance. The C-suite and board of directors are convinced AI is the future, but they aren’t realizing any discernible impact on the bottom line today. This disconnect is creating frustration and stalling momentum, even for the internal champions like CIOs and CTOs who see the value.

2. The Real Bottleneck: Measurement, Not Technology

The single biggest obstacle isn’t the AI itself. It’s the measurement crisis. We are facing a paralysis in measurement, a profound difficulty in estimating and demonstrating the value of our AI initiatives. How do you truly measure what this new tool is doing?

The problem is so pronounced that research firm Gartner predicts that at least 30% of generative AI projects will be abandoned after proof of concept, in large part because of “unclear business value”.

Imagine trying to justify the ROI of email in 1995. What’s the business value of chat? At some point, these tools become an expected cost of doing business. This is the great fear: that AI will just become another infrastructure cost center we all have to pay for, like Office 365, rather than a true center of excellence that drives value.

3. The Horizontal vs. Vertical Trap

When companies do try to get value, they often fall into one of two traps, paralyzing their ROI.

The first is the Horizontal “Peanut Butter” Approach. This is where you just “give everyone in the organization a chatbot” and hope for the best. You roll out a Copilot to every employee and tell them to “go forth and be productive”. Studies show this can save individuals 20 to 95 minutes a day. But are they reinvesting that saved time into value-added work, or are they just... doing nothing with that extra time? This approach spreads value so thinly across thousands of micro-tasks that it doesn’t impact a single, major value stream. The Theory of Constraints says that unless you improve efficiency at a core bottleneck, the overall system throughput doesn’t improve.

The second is the Vertical “Spear” Approach. This is the opposite. It’s an expert agent or bespoke project designed to replace or transform one specific, core business process. These projects have very clear ROI and defined scopes. The problem? An estimated 90% of these high-value projects get stuck in pilot mode. They fail to scale because they can’t handle real-world exceptions or they end up creating more validation work downstream.

4. The “Belief-Changing” Win Comes from Spears, Not Peanut Butter

So, how do you move the needle? The data shows belief-changing wins come from “spears,” not peanut butter. A belief-changing win is that “splash of cold water on the face” moment where leadership says, “Okay, I get it. What’s next?”.

This isn’t a “moonshot” to solve every problem. It’s a single, highly targeted spear attack that automates a high-cost, clearly understood internal bottleneck.

We’ve seen two perfect examples of this. JPMorgan Chase’s COIN, which stands for Contract Intelligence, didn’t try to reinvent finance. It automated one incredibly tedious task: reviewing complex legal contracts. This single platform saved an estimated 360,000 work-hours annually. That is a win that changes beliefs.

Similarly, Novo Nordisk’s NovoScribe targeted another painful bottleneck: creating regulatory reports for clinical trials. Their AI tool cut a process that routinely took 12 weeks per report down to just 10 minutes.

This is often confusing for individuals who are seeing massive gains. As a professional, I’m 10 times more productive because I bounce between four different AI subscriptions all day. But not every role is like that, and not every person has the time or permission to optimize their own workflow. This is why simple documentation and “lunch and learn” sessions are critical; people are afraid to use tools when they don’t know what they’re allowed to do.

5. “Time Saved” Is a Vanity Metric. The Real KPI Is “Capacity Captured.”

For decades, automation engineers have used “time saved” as their primary metric. This research shows that’s a mistake.

“Time saved” is a vanity metric.

If an employee does eight hours of work in one hour, then scrolls Reddit for the next seven, did the business gain anything? No. That time wasn’t reinvested.

The true metric is “realized hours” or “capacity captured”. You shouldn’t be asking, “How many hours did we save?” You should be asking, “How many effective hours of work are we getting done?”.

Imagine you have 10 employees working 8-hour days. Your baseline is 80 hours of labor per day. Your goal with AI isn’t to get that 80 hours of work done in 40. Your goal is to get 200, 400, or even 800 effective hours of work out of those 10 employees in the same working day. That is the metric that will change the bottom line.
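The difference between the two metrics can be made concrete with a toy calculation. All the numbers below are illustrative assumptions, not figures from any study:

```python
# Toy comparison of "time saved" (vanity) vs. "capacity captured" (real)
# for a 10-person team. Numbers are illustrative assumptions only.

def time_saved(baseline_hours: float, hours_now_needed: float) -> float:
    """Vanity metric: hours of old work no longer needed."""
    return baseline_hours - hours_now_needed

def capacity_captured(paid_hours: float, productivity_multiplier: float) -> float:
    """Effective hours of work actually delivered within the paid hours."""
    return paid_hours * productivity_multiplier

team_paid_hours_per_day = 10 * 8  # 10 employees, 8-hour days = 80 paid hours

# Scenario A: AI compresses the old workload, freed time is NOT reinvested.
saved = time_saved(baseline_hours=80, hours_now_needed=10)
delivered_a = 80  # output unchanged: still one day's worth of work shipped

# Scenario B: same speedup, but the freed hours ARE reinvested in more work.
delivered_b = capacity_captured(team_paid_hours_per_day, productivity_multiplier=8)

print(saved)        # 70 hours "saved" -- looks great, changes nothing
print(delivered_a)  # 80 effective hours -- the bottom line is flat
print(delivered_b)  # 640 effective hours -- the metric that matters
```

Scenario A is the Reddit-scrolling case: 70 hours “saved”, zero hours captured. Scenario B is the same technology with the freed capacity actually redeployed.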

6. AI Is a “Bottleneck-Shifting Machine”

According to the Theory of Constraints, any improvement not made at the system’s core bottleneck will not improve the system’s overall throughput. Furthermore, when you do alleviate one bottleneck, the pressure just shifts to the next one.

AI is a bottleneck-shifting machine.

A brilliant example is GitHub Copilot. It made developers 55% faster at writing code. A huge win, right? Wrong. The bottleneck just shifted. Because developers were pumping out so much more code, code review times spiked by 91%. Now you need AI to do the reviews, too.
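The Copilot story can be sketched as a toy serial-pipeline model. The stage names and rates below are assumptions chosen to mirror the example, not real measurements:

```python
# Minimal Theory of Constraints sketch: a serial pipeline ships only as
# fast as its slowest stage. Stage rates are illustrative assumptions.

def throughput(stages: dict) -> float:
    """Units/day the whole pipeline can ship = capacity of the bottleneck."""
    return min(stages.values())

pipeline = {"write_code": 10.0, "code_review": 6.0, "deploy": 12.0}  # units/day

base = throughput(pipeline)  # 6.0 -- code review is the constraint

# Speed up a NON-bottleneck stage (coding, a la Copilot): nothing changes.
pipeline["write_code"] *= 1.55
after_coding_boost = throughput(pipeline)  # still 6.0

# Speed up the bottleneck itself: throughput rises, and the constraint SHIFTS.
pipeline["code_review"] *= 2.5
after_review_boost = throughput(pipeline)  # 12.0 -- deploy is now the bottleneck

print(base, after_coding_boost, after_review_boost)  # 6.0 6.0 12.0
```

Making developers 55% faster at a non-bottleneck stage moves nothing; fixing the review bottleneck helps, but only until the next stage becomes the constraint.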

The path to ROI isn’t a single automation; it’s a relentless game of whack-a-mole. Even with JP Morgan’s COIN, processing infinite contracts just moves the bottleneck somewhere else. You have to find and eliminate one constraint after the next.

7. The Most Defensible ROI Might Be Human-Centric

This may be hard for finance-minded leaders to hear, but the most defensible ROI might not be about time or money at all. It might be human-centric.

Now, I personally disagree that this should be the primary focus, but its value is undeniable. Instead of only looking at the top or bottom line, look at factors like developer satisfaction, burnout levels, reported “flow states,” and talent retention.

Your human capital is incredibly valuable. If these AI tools make your employees’ lives better, reduce their burnout, and make them happier to come to work, that is a massive win. As someone who has experienced burnout, I can tell you that if I were still in infrastructure automation today with these AI tools, I would be having so much fun I’d probably never have left. Retention is a definitive signal that most organizations take very seriously.

8. Governance Is the Real Gatekeeper

Finally, the biggest bottleneck in your organization probably isn’t the CIO or the CEO. It’s the CFO, the Chief Legal Officer, and the Chief HR Officer. Governance is the primary gatekeeper to realizing AI’s value.

The CFO says, “Show me the money. Prove the ROI”. The Chief Legal Officer and CHRO say, “Show me this is legal, compliant, and ethical”.

It’s tempting to frame them as the “bad guys” slowing down progress. I don’t agree with that. They are doing their jobs. It is literally their job to ensure that everything is done correctly and with the utmost discretion.

This tension is healthy. It forces the tech evangelists to go back to the drawing board, move past the hype, and build solutions that are not just innovative, but also provably valuable, legal, and ethical. That is how you find the real, sustainable wins.


I hope these insights are both valuable and illuminating to you.
