The Fastest Ways to Fail at AI
Three Mistakes That 80 Million People Are Making Right Now
He writes “AI is bigger than Covid” and cites no employment data. He says, “Share this before it’s too late,” while simply broadcasting his anxiety about his own job at an AI company.
More than 85 million people saw this post. They shared it, endorsed it, lost sleep over it.
All of them walked away with exactly the wrong lessons.
Not because the data is wrong; there was no data to begin with. But because Matt Shumer is very good at telling a scary story, and the way he tells it hypnotizes you into believing every single false argument he makes.
So I spent 20 hours cross-referencing the sci-fi he wrote with the latest employment reports and data. I found a very different story from his claims.
If you read his post the way most people did, and have decided to act on it, you're making the wrong call. You’ll either be cutting people you still depend on, sprinting without a plan, or swapping real strategy for shiny tools.
I'm going to show you a simple demand curve that defuses every single ‘AI is taking over’ claim.
So by the end of this, you can avoid three mistakes that 80 million people are making right now, mistakes the data actively argues against.
Mistake 1: Treating “AI could do” and “AI actually does” as the same thing
I'll show you the curve in a moment.
First, the mistake that most leaders make before they even get to strategy.
Separately from Shumer's post, Anthropic published a report a week later. One of its charts launched hundreds of LinkedIn posts counting down the days left for software engineering jobs, seemingly confirming Shumer’s points.
But that’s only half the story, as always.
This is the chart from the report, and you’ve likely seen it in the last few days.
This blue area is the theoretical AI coverage of 22 jobs. In plain English, it's what AI could do for each job listed.
For example, an LLM could cover 94% of tasks in Computer & Math and 90% in Office & Admin.
However, here's what fewer people talked about.
Same chart. Same jobs. Same data. But the other half of the chart is overlooked.
Stop staring at the obvious blue part. Instead, let this color change draw your eye to the red area, and to the gap between the red and the blue.
Because that’s what's actually happening (or not happening) in workplaces right now.
For instance, Claude currently covers just 33% of tasks in Computer & Math. Remember, that's the same category where AI could supposedly already do 94% of the tasks.
So there’s a 61-point gap between what AI could do and what anyone is actually doing with AI.
I've been tracking this exact question since 2023, using the same paper (Eloundou et al., 2023) that Anthropic built its 2026 framework on. And the finding remains: the gap between theoretical and observed coverage is enormous.
This gap is all the proof you need that there’s something people like Shumer either don’t understand or have no interest in explaining. Because if they did, the panic would become a lot less shareable.
However, it is also exactly where the first mistake gets made.
Many leaders learn that AI could now do 90%+ of the tasks and pull the trigger to fire people tomorrow.
Remember what’s happened in a lot of “AI transformation” stories since 2023? Leaders rushed in, cut headcount, then discovered the tools don’t perform as well as the demo. Months later, they scrambled to rebuild the human capacity they’d just removed.
That’s Mistake 1: treating “AI can do most tasks on paper” as a green light to cut the humans who make those tasks actually work in the real world.
You might think:
“This gap is only temporary.”
But no, it isn’t. People have been trying to close this AI transformation gap since 2020.
Why AI Has Been "Replacing" Your Job for Six Years and Still Hasn't
Before we move on to mistake #2, let’s talk about what creates that gap.
Rewind.
In 2020, when GPT-3 first launched, analysts were already writing that it was 'only a matter of time' before AI would replace customer service entirely. That was six years ago. Customer service still isn't fully automated.
The slowest part of any system controls the speed of the whole system.
Take cooking dinner. You can chop vegetables in two minutes and boil pasta in eight, but if the sauce needs 30 minutes to simmer, the meal takes 30 minutes.
And the slowest part in our system isn't the AI.
Let me use this rate-determining step chart to explain the concept further (you might remember it from your chemistry lessons). It's an example of how a scientific principle perfectly explains how we run our society.
The first hump is the work AI can do. It clears fast. The second hump is the human stuff, and the cost of crossing that hurdle is much higher.
Think about the last time you brought in a new person. The person has an impressive CV and could theoretically handle everything.
But on day one, they don't know which client needs a call before anything moves. They don't know that the legal team takes 3 weeks to review any new vendor. Even by month 3, they still don’t know the workaround for the undocumented legacy system.
What they could do and what they can actually deliver are two different things.
I want to emphasize this: it isn’t about the task, but everything around it. The messy systems, regulations, bottlenecks, politics, and all the unwritten rules people follow without realizing it.
And here, AI is actually worse off than the new hire.
It never fully catches up, because organisations themselves keep changing: new clients, new regulations, new internal politics. The context resets faster than any model can learn it.
So this is why the adoption gap exists.
Mistake 2 is subtler — and it's what happens after you survive the first one.
Mistake 2: Mistaking “more AI” for efficient AI transformation
This is the long-tail chart that will help you keep your AI sanity for the next 10 years.
Left side, the head of the curve.
This is where all mass-market software lives. The tools you use every day. Every SaaS company was built to solve the problems in this area.
For example, Salesforce exists because millions of sales teams needed a CRM. Canva exists because hundreds of millions of people want to design something quickly without pro-level tools like Photoshop.
The economics worked because the market was large enough to justify the build.
But inside every one of those companies and inside every company using those tools, there’s a different list.
The internal report that has to go to one client in a specific format nobody else uses. The approval workflow that’s unique to your compliance setup. That dashboard your CFO really wants, which is slightly different from the one your BI tool generates.
These are trivial, yes, but they are also the majority of daily work.
These automatable tasks have always sat in the backlog.
Not because nobody wanted them done, but because the engineering cost never justified the output. Important, yes, but never urgent enough to move to the top. So they just sat there.
AI makes these tasks visible again, and actionable for the first time.
Maybe by the engineers, maybe by the internal users themselves. The demand was always there. The capacity wasn’t.
That’s why Citadel Securities found software engineer postings up 11% year-on-year in Jan 2026.
And why the BLS projects AI-exposed roles to keep growing, not shrinking.
The backlog is now finally addressable in 2026.
However, here comes Mistake 2: mistaking “more AI” for efficient AI transformation.
When one person can now do what took three people before, you don’t automatically get a leaner organisation. You get a faster one.
A faster car with no direction just gets you killed sooner.
What you’d end up with is two people solving the same problem independently because nobody has defined ownership, or internal tools that contradict the workflow.
Which brings us to Mistake 3, the one that nobody in Shumer's comment section is discussing at all.
Mistake 3: Tool first, problem-solving second
Now, look at the right side of this curve, the tail.
This is where thousands of real, painful problems have been sitting for decades, but no company ever built a suitable solution. Not because nobody needed one, but because the market was too small to justify a dev team.
A permit process in California that takes 627 days because one code citation is slightly wrong. A cardiologist in Brussels watching patients forget half of what he just told them. A road technician in Uganda who physically can’t drive enough roads to assess the damage.
These problems are real.
The people living there know exactly what’s broken. But no VC funds those problems, because the economics never worked. Until now.
AI coding tools change that equation. One person who deeply understands the problem can now build something useful for 200 users and have it actually function.
I wrote about this in detail in my analysis of the Anthropic hackathon.
And here comes Mistake 3: tool first, problem-solving second.
The people who take Shumer’s post seriously aren’t asking:
“What problem do I already understand better than anyone, that AI could now help me solve?”
They’re asking,
“What should I do so that I don’t get replaced by AI?”
That’s the logic running backwards.
Most people think tool first, then hope a problem worth solving appears.
And that produces a very specific pattern: new AI subscriptions every week, sign-ups for prompt engineering courses, and LinkedIn bios updated to “AI-native.”
Movement that looks like progress but has no direction.
The AI-Could-Do vs. AI-Does Gap Is Your Advantage
That gap between AI-could-do and AI-does isn’t a warning or a countdown, whatever everyone else says.
It is your opportunity, and your working environment, for the next few years.
Let’s recap the three mistakes.
Companies failed at their AI transformations because the edge cases run deep and wide, and the knowledge that held the system together lived in the people they cut. That’s Mistake 1.
AI slows hardest in roles built on implicit knowledge, entangled relationships, and unspoken rules. And with no clear organizational direction, the acceleration will only kill you faster. That’s Mistake 2.
And if you’re still asking “what tool do I need?” instead of “what problem do I already own?”, that’s Mistake 3.
Shumer collapses all the caveats (what AI could do, what’s exposed, what’s deployed, and what happens to jobs) into one biased story serving a very narrow outcome: his own.
You, the person who knows the problem deeply enough to direct the tool accurately, aren’t being replaced. Quite the opposite.