2nd Order Thinkers
Why Everyone Misread MIT’s 95% AI “Failure” Statistic

The study doesn’t reveal a crisis—it documents AI’s normal learning curve, the same adoption bottlenecks that shaped PCs and the internet.

You’ve seen this MIT study. Or at least you’ve heard people talk about how 95% of organizations are getting zero return from AI. Your LinkedIn feed is full of people sharing it with knowing nods, saying “I told you so” about the AI hype bubble.

The GenAI Divide: State of AI in Business 2025

Meanwhile, McKinsey and other consultancies are much more positive about GenAI usage in the enterprise.

https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

All these statistics are correct.

The issue is that most people (even some AI celebrities and scholars) get lost in the nuances of economic terms and, worse, completely miss how technology adoption actually works.

The MIT report measured “intensive margin” adoption: full production deployment with measurable KPIs and ROI impact measured six months post-pilot. This is like asking, “How deeply are you using it?”

The McKinsey survey, by contrast, measured “extensive margin” adoption: an organization counts if it uses AI in at least one business function, regardless of scale or measurable impact. Think of this as asking, “Are you using it at all, anywhere?”

These aren’t contradictory findings.

They’re documenting different phases of the same inevitable process that has played out with every transformational technology for the last 40 years.
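The two definitions can be sketched in a few lines of code. This is a minimal illustration with made-up numbers (nothing here comes from the actual surveys): the same population of organizations produces both a ~70% “adoption” rate and a ~94% pilot “failure” rate at once, because each survey measures a different margin.

```python
# Made-up population: 100 organizations, 70 of which use AI somewhere,
# only 4 of which reached deep deployment with measured production ROI.
orgs = []
for i in range(100):
    orgs.append({
        "uses_ai_anywhere": i < 70,  # any use, any scale (extensive margin)
        "production_roi":   i < 4,   # measured KPIs and ROI (intensive margin)
    })

# Extensive margin: share of orgs using AI in at least one function.
extensive_rate = sum(o["uses_ai_anywhere"] for o in orgs) / len(orgs)

# Intensive margin: among orgs that use AI at all, how many never reach
# measurable production ROI?
pilots = [o for o in orgs if o["uses_ai_anywhere"]]
failure_rate = 1 - sum(o["production_roi"] for o in pilots) / len(pilots)

print(f"Extensive-margin adoption: {extensive_rate:.0%}")  # 70%
print(f"Intensive-margin failure:  {failure_rate:.0%}")    # 94%
```

Both numbers are “correct” descriptions of the same world; they simply slice it differently.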

What actually matters is that while you’re debating success rates, marginal economic gains are pushing everyone toward adoption, whether they want it or not.

You need to see past the 95% rate as a warning about AI.

It’s confirmation that AI is following the exact same adoption pattern as personal computers and the internet.

I’ve been emphasizing the harms and risks of adopting AI, knowing that more than 24.1% of documented AI risks stem from human mistakes, intentional or not. Still, all of us need to adopt AI one way or another, which I’ll explain below.

Source: the MIT AI Risk Repository

TLDR

  • Q: What did the MIT study really find about AI projects?
    The MIT “GenAI Divide” study found that around 95% of enterprise generative AI pilot projects fail to deliver measurable business results or progress beyond early stages, despite billions invested. Most failures happen because organizations get stuck in endless pilots, struggle with integration, or focus on flashy experiments over deep process change.​

  • Q: Is AI adoption still racing ahead despite these failures?
    Yes. Companies and individuals still embrace AI tools to capture even small productivity gains, even as most major projects stall. The pace of adoption is relentless because marginal benefits, competitive pressure, and network effects drive widespread use, regardless of overall project success rates.​

  • Q: Where are we now in the history of AI adoption compared to other technologies?
    AI is at the pilot phase, like PCs and the internet during their early decades, with high failure rates and immature best practices. These failures are typical for transformative technologies.​

  • Q: Does everyone have a choice about adopting AI?
    No. The landscape is shaped by millions of individual and organizational decisions, creating irreversible competitive dynamics. Even if everyone sees the risks, holding back isn’t viable. The only move is to experiment early, position for future change, and learn from current mistakes.​

Shall we?

See this Subscribe button as a Thank You. Do you say thank you to a waiter? If yes, why not to your writer? 🙂


The Confusion Comes From Measurements

Let me explain how marginal economic gains drive adoption despite project failure rates, because this is where most people get confused.

Just like when you discover that ChatGPT can cut your email writing time in half, you adopt it.

When a marketing team finds that an AI tool can generate first drafts 60% faster than starting from scratch, they adopt it; or when a developer realizes GitHub Copilot reduces coding time by 20%, they adopt it…

None of these adopters cares about their company’s “AI transformation project” success rate.

They care about making their Tuesday afternoon less miserable.

Essentially, McKinsey’s survey counted organizations that “use AI in at least one business function.” A company where one person drafts emails with ChatGPT counts exactly the same as a company that has rebuilt its entire customer service operation around AI.

The aggregate result is the 70%+ adoption rate you read in many consulting firms’ reports. It emerges from millions of individual decisions about small productivity gains, not from boardroom strategies about digital transformation.

Meanwhile, the 95% failure rate captures something completely different. The MIT team defined success as:

  • “company-wide projects attempting to restructure business processes around AI,”

  • “deployment beyond the pilot phase with measurable KPIs,” and

  • “ROI impact measured six months post-pilot.”

That definition requires deep integration that transforms business processes and delivers quantifiable returns.

Worth mentioning: this is a report with a small sample size (52 organizations, 153 leaders, four industries). Directional, not definitive.

The GenAI Divide: State of AI in Business 2025

These projects fail because they aim for full production deployment, in at least one department, across the entire organization. That requires infrastructure, training, process redesign, and cultural change, which no digital transformation project could ever complete within six months.

However, for a technology to become inevitable, you do not need deep corporate integration. Basic usage across many functions, even something as simple as using ChatGPT for a sales pitch, creates competitive pressure that forces deeper adoption over time.

The economic logic is straightforward. Individual shallow adoption has low switching costs and immediate marginal benefits.

Enterprise-level adoption, by contrast, requires significant infrastructure investment, process redesign, and organizational change, and most companies can’t execute it effectively. I once ran a consolidation program across three continents and dozens of countries; just planning and getting a go-ahead from the C-level team took six months…

But even the shallow, individual-level adoption creates competitive pressure that eventually forces enterprise adoption among market leaders, which then creates further pressure for broader intensive adoption.
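This pressure dynamic can be sketched as a classic threshold cascade (a Granovetter-style model, with illustrative thresholds I picked for the example): each firm adopts once the share of adopters around it reaches its own comfort level. One eager experimenter can tip a market that never collectively “decided” to transform.

```python
def cascade(thresholds):
    """Run adoption rounds until stable; return the final adopter share."""
    n = len(thresholds)
    adopted = [t <= 0.0 for t in thresholds]  # unconditional early adopters
    changed = True
    while changed:
        changed = False
        share = sum(adopted) / n  # current share of adopting firms
        for i, t in enumerate(thresholds):
            if not adopted[i] and share >= t:
                adopted[i] = True  # competitive pressure crosses my threshold
                changed = True
    return sum(adopted) / n

# Ten firms, from one eager experimenter (0.0) to very cautious (0.9):
print(cascade([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]))  # → 1.0
# Remove the experimenter and the cascade never starts:
print(cascade([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]))  # → 0.0
```

The point of the toy model is the fragility in both directions: a handful of shallow adopters is enough to drag everyone else in, round by round.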


What History Tells Us About Tech Adoption

There is no need for handwaving or metaphors. Consider what happened with personal computers or the Internet.

No Clear ROI For PC Adoption Before the 1990s

Using the same standard as this MIT report, most corporate PC implementations in the 1980s failed by any reasonable metric.

Companies spent millions on hardware and software that sat unused, training programs that employees ignored, and ambitious productivity improvement projects that delivered no measurable ROI.

Image from BBC News.

So, reading this 1999 paper (Computers and Productivity in the Information Economy), you’d quickly see two things. First, total business IT equipment investment nearly doubled from the 1970s to the 1990s.

Second, manufacturing productivity went sideways after computers hit the scene, even as IT investment exploded.

If you’re using the MIT GenAI study as proof that AI projects “don’t work,” here’s evidence that this failure rate is perfectly normal.

Yet by 1995, not having PCs was competitively impossible because enough individual workers and departments had found marginal benefits that non-adopters couldn’t compete.

A few interesting comments from this 30-year-old study that I’d like to share with you:

These examples serve to illustrate that computers are powerful tools, but they can very easily be used in a less-than-optimal manner. By facilitating tasks such as document composition and layout and bibliographic searching for non-specialists, they can undermine the economic efficiencies that result from the pure specialisation of labour.

Sound familiar? Or this one, which reads more like a universal economic truth; still, people tried too hard to justify spending on computers (in the 90s) or AI (agentic systems today) for the sake of chasing the latest technology.

As long as the benefits of additional computer investment exceed the benefits from investing such resources elsewhere, firms should continue to invest in computers. Beyond a certain threshold, benefits will diminish to the point that further computer investment is a losing proposition.

How about this?

The PC era may have initially promised more than it could deliver, to the detriment of computer investment strategies

You’re likely bored with the dotcom example, so let me try something else.

A 2005 MIT study used various technology projects as examples, trying to find the correlation between successful IT projects and other factors (like the availability of complementary settings).

The second lesson to be drawn relates to the length of time the process took to work through into the productivity transformation of the late 1990’s. Notwithstanding the rapid fall in computer and ICT prices, the interplay between the competitive and regulatory environment and the successful WalMart strategy took decades to emerge. The bar code patent dates from 1949 but the first retail product was not scanned at a checkout until 1975.

Or, if you like, you can find a list of failed and over-budget custom software projects on Wikipedia.

Treat the MIT Study as a Historical Document

The real value of MIT’s research isn’t in predicting AI’s future.

It’s evidence that AI’s adoption pattern isn’t much different from every major technology cycle before it: marginal gains visible at the individual level, enterprise success still mostly unmeasured.

Future researchers will probably treat this report as a timestamp, studying how AI moved from experimental to essential, just as historians now study the early struggles of PC and internet adoption.


The Invisible Hand of Marginal Returns, And Its Victims

When a marketing manager uses AI to generate social media posts 40% faster, they’re optimizing for their quarterly performance review. When a software developer uses GitHub Copilot to reduce debugging time, they’re optimizing for project deadlines and personal productivity. When a customer service representative uses AI to draft responses more quickly, they’re optimizing for call volume metrics and work-life balance.

None of these people decided that AI should disrupt entire industries or displace human workers. They made individual decisions about marginal productivity gains.

But the aggregate effect of millions of such decisions creates a systemic transformation that pushes everyone else toward adoption, whether you want it or not.

Network effects amplify this dynamic.

Once enough people in an industry use AI tools, not using them becomes competitively disadvantageous.

When your competitors can produce content faster, respond to customers more quickly, or analyze data more efficiently, you either adopt similar tools or accept a declining market position (you’d be viewed as less productive). You will feel pressured to adopt regardless of wider project or enterprise success rates, sometimes with minimal ethical consideration.

It turns out that everyone is now a victim of these marginal gains from individual adoption. You and I have no meaningful choice in the transformation, but we all bear its consequences anyway.

One of the biggest impacts seen so far is on the media and software industries.

The GenAI Divide: State of AI in Business 2025

Do not believe a word of the “AI caused the tech layoffs” news.

In software, we’ve seen augmentation of engineering coding tasks, but few CTOs I spoke with managed to actually reduce the number of engineers on a team because of AI.

The efficiency boost IS REAL. However, this is only the case when used by an experienced and logical developer who has their brain turned on when coding.

The irony is that while individual junior and mid-level developers adopt AI tools for marginal productivity gains, many CTOs report that the collective effect creates more oversight burden to filter out low-quality, unsafe code, not less. You can see how easily a marginal benefit at the individual level turns into a systemic cost at the team level.

“As AI adoption increased, it was accompanied by an estimated decrease in delivery throughput by 1.5%, and an estimated reduction in delivery stability by 7.2%.” — The 2024 DORA report from Google Cloud

The current state of any AI coding tools is like a plumber who knows how to install a new kitchen faucet, but you can’t expect them to design the pipes and prevent leaks for you, not to mention hoping their lack of infrastructure knowledge can prevent your house from drowning during a hurricane.

Yes, developers write more lines of code faster with AI. But generating more code per day is never a KPI!

You want evidence of products releasing more frequently, or evidence of faster customer feedback loops. Unfortunately, you won’t find it just because your team started to code with AI.

Because it doesn’t matter how fast you code if the product isn’t shipped in front of customers. And it especially doesn’t matter if what you built so fast sits unused. The bottleneck in software development was never typing speed. It was deciding what to build, validating it with users, and navigating the organizational friction of actually shipping to production.

AI tools can easily blind you to optimizing for the wrong metric.

They make the easy part (writing code) faster while doing nothing for the hard parts (knowing what to build, ensuring it works in production, getting it in front of users who actually need it). This is the marginal gains trap playing out in real-time: individual developers feel more productive, but organizational output stays flat or even declines under the weight of more code to review, more technical debt to manage, and more features that nobody asked for.

Tech layoffs happen All The Time.

But there are dozens of potential causes: failed projects, restructuring, cuts in functions adjacent to engineering, like scrum masters, agile coaches, and designers. None of these is because of AI. Yet AI becomes the convenient scapegoat because it sounds more strategic than admitting to poor planning or management failures.

The opposite story in the creative industry.

In content creation, the story is very different, if not worse.

Content creation has a very low entry barrier and minimal maintenance costs (unless we’re talking about a million-dollar production). This makes it the perfect case study for how individual marginal gains create systemic losses.

The economic logic is brutal, but inevitable: Why pay a freelance writer $500 for an article when AI can produce similar content for $5?

Individual clients making rational cost decisions create an industry-wide race to the bottom that disincentivizes high-quality work. Creators who spent years building expertise find themselves competing with tools trained on their own published work: Sora 2 recreating Marilyn Monroe in Game of Thrones, Nano Banana generating Simpsons-style cartoons, or ChatGPT simply mimicking J.K. Rowling’s writing style.

A still from a video generated with the newly released Sora app, depicting a woman who looks like Marilyn Monroe riding a dragon.

The casualties are measurable.

For individual publishers, the impact is existential.

The Chancery Lane Project saw its website visits drop by 52% in the first half of 2025 compared to 2024, even as AI bot traffic scraping its content surged. They’re reaching more people than ever, just not in ways they can measure, monetize, or control.

Chart: The Chancery Lane Project. Source: Fathom Analytics. Created with Datawrapper.

News outlets and publishers are also experiencing traffic collapses as AI-generated summaries replace actual clicks.

What’s sad is that content creators can’t simply opt out.

I learned this firsthand when I experimented with blocking AI training on my newsletter.

I once did, because I hated the idea of AI copying and training on the back of my sweat and tears in all this research and writing while I got no upside whatsoever. Then one day I tried using AI for competitive analysis, and I saw it couldn’t find my data and got everything about this newsletter wrong.

That’s when I woke up to the reality that my potential readers would run into the same issue if they ever used AI to index me.

It struck me that refusing to work with AI in 2025 could be akin to refusing to let Google index your website in 2000. So I compromised.

I turned off that toggle. I simply want to survive as a writer; I enjoy this too much to forgo the opportunity to be discovered out of pride and mixed feelings about AI.

Apply this story to millions of other content creators, and you start to see where all of this is heading.

Individual creators make rational decisions about discoverability. Individual readers make rational decisions about convenience. AI companies make rational decisions about training data. Publishers make rational decisions about staying indexed.

But the aggregate effect is an industry-wide transformation where content gets created, distributed, and consumed in ways that systematically eliminate the people who created the value in the first place.

We are all just casualties of marginal adoption decisions.


Decision-Making in the Age of Inevitable Transformation

You don’t really have a choice here, other than positioning yourself within an inevitable transformation.

This creates a collective action problem.

Even if everyone agreed that slower AI adoption would produce better outcomes for society, no individual actor can afford to voluntarily reduce their competitive position. The first-mover advantage goes to early adopters, and the penalty for late adoption increases over time.

Think about a mid-size marketing agency today.

They know that AI tools reduce employment in their industry and create risks for creative professionals. But they also know that competitors using AI can deliver projects faster and cheaper. If they refuse to adopt AI on ethical grounds, they lose clients to less scrupulous competitors. If they adopt AI, they contribute to industry-wide displacement of human workers.

The agency has no good options because the transformation is already underway. The rational response is to adopt AI tools, and perhaps advocate for better regulation and social safety nets… regardless, the adoption part is non-negotiable to remain competitive.
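The agency’s bind is a textbook collective action problem, and a toy payoff table (the numbers are illustrative, chosen only to show the structure) makes the trap explicit: whatever the rival does, adopting pays more than holding back, even though mutual restraint would leave both firms better off than mutual adoption.

```python
# payoffs[(my_move, rival_move)] = my payoff (illustrative numbers only)
payoffs = {
    ("hold",  "hold"):  3,  # mutual restraint: healthy margins for both
    ("hold",  "adopt"): 0,  # I hold back, the rival undercuts me on price
    ("adopt", "hold"):  4,  # I adopt first and win the rival's clients
    ("adopt", "adopt"): 1,  # everyone adopts: race to the bottom
}

def best_response(rival_move):
    """My payoff-maximizing move, given what the rival does."""
    return max(("hold", "adopt"), key=lambda m: payoffs[(m, rival_move)])

# "adopt" dominates regardless of the rival's choice, so both firms
# end up at (adopt, adopt) with payoff 1, worse than mutual restraint's 3.
print(best_response("hold"), best_response("adopt"))  # → adopt adopt
```

Neither firm is irrational; the equilibrium is simply worse than what coordination could have achieved, which is why individual ethics alone can’t slow adoption.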

This also explains why concerns about AI safety, job displacement, and social harm have little impact on adoption rates.

Individual actors know about these risks but can’t afford to act on them unilaterally. The result is faster adoption than anyone intended, with slower policy responses than the situation requires.

My advice might feel uncomfortable for some of you, especially if you are an AI vegan. I believe the best position is to prepare for the inevitable transformation: focus on positioning within the new equilibrium rather than preventing the transition.

You don’t need me to tell you how to develop skills that complement AI rather than compete with it, and so on… Other consultants have been nagging you about this for a while now.

For organizations, it means experimenting with AI applications in low-risk contexts and developing internal capabilities gradually, as the GenAI Divide: State of AI in Business 2025 report itself recommends.

The alternative…

You can wait for better evidence about long-term outcomes, hope for regulatory solutions, or refuse adoption on ethical grounds… all of which I see as economically suicidal in a competitive market.

The transformation will proceed with or without your participation.

The marginal economic gains driving AI adoption will continue regardless. Individual actors will keep adopting AI tools that provide immediate benefits, competitive pressure will force broader adoption, and network effects will make non-adoption increasingly expensive.

Your strategic response should focus on positioning within this inevitable transition rather than fighting it.

Learn from history… Marginal gains compound into transformational change, and early positioning advantages are difficult to overcome later.
