4 NVIDIA Reports. 4 Embarrassing Contradictions That Nobody Noticed.
They claim 88% revenue gains. A third of them also admit they can't measure ROI.
NVIDIA surveyed 3,200 people about AI and published the results across four industry reports.
The headline: 88% of respondents said AI helps increase annual revenue.
That number is technically in the data.
But so is this: a third of the same respondents say they can't clearly measure AI's ROI. They're reporting revenue gains that they admit they can't verify.
Nobody caught it — because you’d have to read all four reports, plus a blog post, and compare every chart. That took me the better part of a week. I found four cracks in NVIDIA’s own data, and four real trends their analysts were too busy hyping to highlight.
But first — what NVIDIA actually means when they say “AI.”
The Definition Tricks That Inflate Every Number
When NVIDIA says “AI” in these reports, they’re not just talking about ChatGPT or Gen‑AI tools.
Your bank’s fraud alert. The Amazon recommendation engine. The AI that reads your MRI. The system that keeps your phone signal stable. All of that is counted as AI adoption in these reports, right alongside ChatGPT and agentic AI.
They're using these decades-old machine learning features to prove that the AI revolution is here, and that you should buy more GPUs. So everything that follows sounds more revolutionary than it is.
When these respondents tell NVIDIA that ‘AI increased our revenue,’ it might mean JPMorgan’s fraud model got better at catching stolen cards. Or Amazon’s ‘you might also like’ box got better at upselling you.
Not that a chatbot closes deals or an AI agent runs Amazon shops.
On top of that, if you read this chart, you’d think 64% of participants are now actively using AI.
However, there’s no definition of ‘actively using AI.’ Is someone who spent the last ten minutes asking ChatGPT to rewrite their email openers ‘actively using’ it?
The phrase ‘actively using’ is doing enormous work across these four PDFs, and it’s never defined.
Keep that in mind. Now, Crack #1.
Crack #1: The ROI Nobody Can Prove
Start with the headline numbers.
Across the four reports, covering over 3,000 respondents, every single industry had 80% or more of respondents saying AI is helping increase revenue.
That’s a wall of green numbers. Looks like the whole economy just figured AI out.
Now read the challenge sections of those same reports.
Roughly 30% of respondents across the survey list “lack of clarity on AI’s ROI” as one of their top barriers.
The telco report re-emphasises the same thing. A big number in its executive summary claims:
90% said that AI is helping to increase annual revenue.
Yet in this exact same report, the challenge of not knowing what to measure comes up again.
So you’ve got a third of the same people saying two completely opposite things at once:
“AI is increasing our revenue.”
“We don’t know how to measure the impact!”
Which one is it!?
I get it. Nobody who just signed off on a multi‑million‑dollar AI budget is going to tell a survey it was a mistake.
But it gets more interesting.
Crack #2: C-Suites See Gains Nobody Else Can Find
NVIDIA breaks down that revenue number by seniority. Overall, 30% of respondents say AI increased revenue by more than 10%.
But among C-suite and VPs? That jumps to 40%.
So the executives are seeing 10 percentage points more revenue impact than the rest of the organization.
Where are those extra 10 points coming from? Is there a secret AI that only runs on the executive floor?
Of course not. And this pattern isn’t unique to NVIDIA’s data.
There is a broader trend here: C-levels consistently report far more confidence in AI than anyone else in the organization.
Here's data from another independent survey (graph redrawn by the WSJ) that found the same pattern: while nearly 40% of C-suite respondents believe AI saves them 8+ hours a week, 67% of staff reported saving under 2 hours.
It’s interesting how the people reporting the biggest gains are structurally the furthest from the actual workflows, while also being the most invested in the answer being positive.
I want to hear from you. In your organisation, do the people deciding the AI budget actually use the AI? Drop a comment.
Crack #3: Half the "AI Agent" Headline Is Vendors Being Assessed
NVIDIA says roughly 42–48% of organisations are “using or assessing” agentic AI. But does that mean these industries are all living in the AI future now?
Below is a screenshot example from the telco report.
The assessing-vs.-deployed split is the same across all four industry reports. Only a fifth of respondents have actually deployed AI agents. Which means the “40%-plus using AI agents” headline is mostly people still evaluating vendors.
What cracks the report further is the challenge respondents reported.
Of those deploying, using, or assessing agents, a third are telling you the same thing: the results drift, and the outputs are unpredictable.
Here’s a screenshot of the finance industry report. The number-one AI agent challenge reported is performance reliability issues.
This should speak for itself.
Now, for our next contradiction, let me ask you this question:
If you are a CEO, the measurements are shaky, the executives are the most optimistic, and the agents aren't reliable yet… What are you doing with your AI budgets next year?
Crack #4: Every Industry Found the Bottleneck — Then Spent Elsewhere
Every industry in this survey lists “lack of AI experts” or “lack of internal skills to monitor AI agents” as a top barrier. In retail, that number jumped from 31% to 46% in a single year and became the number-one thing holding companies back.
And in finance, their second-largest challenge when using AI agents is the lack of internal skills or expertise to manage or monitor them. The same lack of AI talent challenge was also noted in two other reports.
By the way, they never define what an “AI expert” is; it’s the same game as “actively using AI.” It could mean a machine learning engineer, or the intern who's good at prompting.
But fine. It’s still clear that talent is the bottleneck.
In that case, wouldn't it only make sense for these companies to put more budget toward training or hiring talent?
I pulled the AI spending priorities from all four reports.
In finance, number one is optimising existing workflows. Healthcare, same thing. Telecom's top priority is “engaging third-party partners to help speed up AI adoption,” rather than hiring AI talent or training internal teams to develop AI skills.
So all four industries diagnosed the same problem: we don’t have the people. And then three of them wrote the same prescription: spend more money on other things.
It’s like going to the doctor, hearing “you need a new heart valve,” and walking out with a prescription for better posture.
What NVIDIA's Analysts Were Too Busy Hyping to Notice
So four contradictions in, it probably sounds like I think these reports are useless.
I don’t.
There are four things in this data that actually matter. They have one thing in common: they’re all boring (in a good way).
Signal 1: Why Every Company Built the Same AI Agent
Here’s a number that kept showing up as I was reading. Across all four industries, roughly 20 to 23% of companies have actually deployed AI agents. Not “assessing.” Not “evaluating vendors.” Deployed. Finance is at 21%. Healthcare, 22%. Retail, 20%. Telecom, 23%.
Four completely unrelated industries. Almost the same deployment rate. My first question was: why?
And then I looked at what they’re actually using agents for, and it suddenly all made sense.
The top use cases are almost identical everywhere. Knowledge management and retrieval show up in all four reports. Internal process optimisation and customer support automation, three of four.
These are horizontal, generic problems.
Every bank, telco, hospital, retailer has documents to search, processes to speed up, and customer tickets to handle.
The convergence tells you where most of the industry is: still picking the low-hanging fruit. Solving the same generic problems with different logos on the dashboard.
But buried in these same reports are also a handful of companies that broke away from the pack. And their results look completely different.
Signal 2: Boring Projects, Real Money
When you look at which AI projects are returning real money across these reports, they all have the same shape.
They’re not company-wide transformations. They’re one team solving one problem.
In finance, the top ROI use case is document processing. Banks are using AI to chew through loan applications and compliance paperwork, rather than having rooms full of people do it.
In healthcare, more than half of medtech companies say they’re seeing returns from AI in medical imaging, where AI flags an anomaly on a scan so the radiologist knows where to double-check.
In retail, it’s demand forecasting for supply chains. Figuring out how much stock to order so less gets thrown away.
In telecom, it’s network automation, where AI spots a failing tower or a congestion pattern and fixes the routing before a customer notices.
None of these will ever be anyone’s LinkedIn post bragging “we used AI to automate an entire team!” with a fire emoji.
But these are the projects with ROI that clearly points back to AI.
And that’s the pattern.
One team picks a real problem they understand well, the data already exists, and the KPI aligns with what they've already been measuring for decades.
AI doesn’t change the measurement. And it shouldn’t have to.
So which companies are still struggling to measure their ROI? Usually the opposite profile: broad rollouts, lots of use cases being “explored,” and the belief that they need an AI-specific KPI.
Signal 3: The Data Wall Was Always There
For a few years, one of the top challenges in these surveys was “we don’t have enough data to train our models.” That number has been dropping. In finance, it went from 49% in 2023 to 16% this year. In retail, 27% to 13% in one year.
That sounds like progress, until you see that in finance, data issues around privacy, sovereignty, and data scattered across different systems went up from 33% to 40%. In telecom, data-related challenges (privacy, silos, complexity) jumped from 20% to 54% in a single year.
When you’re running a pilot, you grab a clean dataset from one team and train a model. That works. When you try to deploy across the company, you suddenly need data from ten departments, three countries, and two legacy systems that don’t talk to each other.
They fixed the ‘not enough data’ problem and landed on the ‘can’t actually use the data we have’ problem.
Signal 4: Budgets Are Finally Following Results
One more thing that’s consistent across the reports, and it’s probably the most encouraging.
In previous years, the top spending priority across these industries was “identify new AI use cases.” Basically, try a bunch of stuff, see what sticks. This year, that's no longer number one in any of the four industries.
The clearest example is healthcare.
Last year, “find new use cases” was the top priority at 47%. This year, it dropped to 37%, and “optimise AI workflows and production cycles” took over, jumping from 34% to 47%.
Finance and retail show the same flip.
The companies that found something that works are now putting money into making it run better, cheaper, and faster.
That’s the most rational signal in any of these four reports.
Next time someone quotes “x% ROI seen in AI projects” at you, three questions are worth sending back.
One: What are we building around the model — not just plugging into it?
Two: What specific number are we trying to move, and by how much?
Three: Where do we actually have clean, connected data right now? Not where we wish we had it.
If they can't answer all three, the high ROI percentage from another company means nothing.