Hey,
At a time when everyone is either expecting or trying to dodge an AI bubble, I thought the combination of macro and tech understanding would help you decide where your investment (portfolio and business) should go in 2026.
So I begged (literally) Klaas Ardinois to put his knowledge into words.
As some of you might be aware, Klaas is my partner in everything.
He spent 20+ years in engineering and 10+ years as a CTO. Klaas has experienced multiple successes and failures in M&A, not to mention boardroom drama that makes Succession look like child’s play.
Someone with a deep tech background, plus structural investment knowledge, is rare.
And you are about to read exactly that.
BTW, if you are a CEO who is stuck on an AI endeavour, reach out to Klaas.
Enjoy.
Jing
Do you want AI with that?
A friend of mine here in the UK used to say,
AI is like chips.
It’s a play on “do you want chips with that?”
This seems to be the response in every restaurant, no matter what you order.
Chips here in the UK, of course, are chunky fries. As a born & raised Belgian, I am rightfully offended by those fried abominations.
Anyway, so do you want AI with that?
Everywhere I turn, it seems shoved in my face.
The AI DD™ combines deep learning technology with 7 Motion Direct Drive to analyze fabric types and optimize washing cycles.
Yes...that is really an LG washing machine. Here’s a short about the machine.
I wish I could make this stuff up.
But this isn’t the appliances section of Builder Magazine, so let’s move on.
Your Neighbour Uses ChatGPT. So What?
Before we go any further, let’s get one thing straight.
Your auntie discovered ChatGPT last Christmas. Your neighbour uses it to write emails. Cool. AI is useful.
But that’s not what moves Nvidia’s stock price.
The value of the technology is not the same as the valuation of companies in the ecosystem.
Nvidia’s valuation is built on expectations — specifically, how many AI chips giant companies will buy next year, and the year after that.
So when people ask “is AI a bubble?”, that’s the wrong question. The real question is: will companies keep spending billions on chips at the rate Wall Street expects?
That’s what this piece is about. Nvidia, market dynamics, and expectations. Not whether AI washing machines are useful.
The video version is here (it will be released on the following Wednesday):
Who’s Who (And Who Owes Whom)
The chip (not the chips for eating… sorry, a dad joke) economy has a few key players. Understanding their relationships is essential to understanding what’s actually happening with demand.
If you are familiar with their relationships, feel free to skip this and jump right to my two theories: bursting or not bursting.
Nvidia Wasn’t Always an AI Company
Nvidia designs the AI chips that power most AI systems today.
They don’t manufacture them (that’s TSMC’s job, in Taiwan). Still, Nvidia’s value lies in the chip design and, critically, the software ecosystem called CUDA, which makes their chips easier to use than the alternatives.
Before 2016, Nvidia was a gaming hardware company with a brief detour into bitcoin mining. Investors treated it like one, paying between 10x and 20x earnings for the stock.
Then, in mid-2016, they released the P100, the first chip built for large-scale AI computation. Everything changed.
Not just revenue.
The valuation itself also shifted.
Suddenly, investors were paying 20x to 55x earnings. That’s a reasonable jump when you’re moving from fickle consumer hardware sales to corporate AI infrastructure investment. More predictable. More sustainable.
Today, we’re near the bottom of that 20x-55x range.
Nvidia’s inventory increased from about $10.1bn at the start of fiscal 2026 to about $19.8bn by Q3, nearly doubling.
(Quick note: Nvidia has a fiscal year ending in January. So they’re currently in FY2026 even though we are in 2025. When I say “next year”, I mean FY2027.)
Just so you know, high inventory is more often seen as a red flag than not. Essentially, it’s now taking longer for Nvidia to convert that pile of AI chips into cash. And the stock market thinks Wall Street’s growth forecasts are about two-thirds too optimistic. (I’ll show you the maths shortly.)
Which raises an obvious question:
Is Nvidia still an AI darling commanding a premium? Or has that era ended?
To answer that, you need to understand who’s actually buying these chips, and who owes what to whom.
The Hyperscalers (Amazon, Google, Microsoft, Meta)
These are Nvidia’s biggest customers. Most of Nvidia’s revenue comes from these four.
And let’s face it, they are horrible customers.
Each one of them has a strategic and financial incentive to reduce its reliance on Nvidia chips. They all have the technical capability (Meta somewhat less so, since it runs no public cloud) to reverse-engineer Nvidia’s approach or build something better. And they have the financial firepower to sustain the R&D.
I said they’re horrible customers because of how they compare to the average hangry customer at Chipotle. None of those Chipotle customers is trying to reverse-engineer the burrito.
You might argue:
my grandma reverse engineers burritos all the time!
Sure. But she has no other customers than you, does she?
Amazon, Google, Microsoft, and Meta all have their own businesses to run, with perfect reasons to cut Nvidia out and keep the margin for themselves. As someone wise once said: Your margin is my opportunity.
And this is already happening. Google’s TPU (tensor processing unit — a chip specifically built for AI) is gaining real traction. They have a live deal with Anthropic, and there’s talk of Meta adopting TPUs by 2027. That’s one reason Nvidia took a hit recently.
It’s not the end of Nvidia, though.
It’s hard to overstate how much their moat is not just hardware but also CUDA and the toolset around it. Replacing the hardware is only a small part of disconnecting from Nvidia. But it’s a meaningful first step toward independence.
Coreweave (And the Data Centre Bottleneck)
You’ve probably heard of AWS or Azure — big companies rent computing power from them instead of buying their own servers. Coreweave does the same thing, but specifically for AI workloads.
Here’s how it works: Coreweave buys Nvidia chips, builds data centres to house them, and rents out the computing power to companies that need AI capabilities but don’t want to build their own infrastructure.
(You’ll see the term “powered shell” in industry reports — that’s just jargon for a building with a power supply that’s ready to install servers.)
Now, there’s a financial inevitability built into this business model.
Let’s now compare Coreweave to Chipotle.
Chipotle buys ingredients, makes burritos, sells them, and has cash in the register by the end of the day. The gap between spending money and making money is hours, maybe days.
Coreweave?
They have to spend billions building data centres, a process that takes years, before they can rent out compute by the hour (of course, no one rents just one hour... the contracts still mostly run for years).
Point being, the gap between spending money and making money is measured in years.
That’s why the gap has to be filled with debt. There’s no other way to fund it. So when you read Coreweave’s balance sheet, don’t think “reckless borrowing.” Think “this is the only way this business can exist.”
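To see the shape of that problem, here’s a toy cash curve in Python. Every number is invented purely for illustration; none of it is Coreweave’s actual economics:

```python
# Toy cash curve: spend $5bn upfront on a data centre, then earn
# $1.5bn a year renting out the compute. All figures are invented.
capex_bn = 5.0
annual_rental_bn = 1.5

cash = -capex_bn  # the build is paid for before a single hour is rented
for year in range(1, 7):
    cash += annual_rental_bn
    print(f"year {year}: cumulative cash {cash:+.1f}bn")

# The curve stays negative for years. That hole is what the debt fills.
```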
And this debt comes, in fact, with the support of their clients and partners.
For example:
Nvidia has a financing arrangement with Coreweave.
Nvidia isn’t funding Coreweave to buy more chips, but agrees to purchase Coreweave’s future unused capacity: a bailout (or a safety net, if you will) for when all else fails.
Not much cash changes hands for expansion, just a small equity stake. This mainly gives Coreweave a way to show future revenue on paper, which helps them raise and restructure more debt.
Some of Coreweave’s debt and revenue numbers from the last 12 months:
$1bn in interest payments.
$3bn short-term debt principal due.
$9bn short-term liabilities overall.
And $10bn long-term debt behind that.
All of that with 70% of their $4bn current revenue tied to a single customer: Microsoft.
And they’ve reported $22bn of future revenue tied to OpenAI paying them. Given OpenAI’s finances, that’s not exactly risk-free.
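To put those figures side by side, here’s a rough back-of-the-envelope sketch in Python. It uses only the numbers quoted above, and the ratios are illustrative, not a formal credit analysis:

```python
# Back-of-the-envelope on Coreweave's last-12-months figures quoted above.
interest_bn = 1.0              # interest payments
short_term_principal_bn = 3.0  # short-term debt principal due
revenue_bn = 4.0               # current revenue
microsoft_share = 0.70         # revenue share from a single customer

# Near-term debt service vs what the business currently brings in.
debt_service_bn = interest_bn + short_term_principal_bn
print(f"Interest + principal due: ${debt_service_bn:.0f}bn "
      f"vs ${revenue_bn:.0f}bn revenue "
      f"({debt_service_bn / revenue_bn:.0%} of revenue)")

# And how much of that revenue hangs on one customer.
print(f"Revenue tied to Microsoft: ${revenue_bn * microsoft_share:.1f}bn")
```

One year of interest plus maturing principal alone matches the entire year’s revenue. That’s the scale of the treadmill they’re on.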
Here’s another problem on top of that customer concentration: Coreweave can’t build data centres fast enough.
One of the major blockers in this space isn’t Nvidia, but construction contractors.
There simply aren’t enough buildings ready to put the chips in. Coreweave’s capex got cut from $20-23bn to $12-14bn this year, not because demand fell, but because the buildings weren’t ready.
This matters. I’ll come back to it.
OpenAI (And the Risk That Connects Everyone)
You (should) know OpenAI — they make ChatGPT.
They need enormous computing power to run it and train new models.
They’re not yet profitable. Now, if we bring Coreweave back into the picture… Coreweave has two income streams that both trace back to OpenAI:
1. Microsoft (70% of Coreweave’s current revenue). One reason Microsoft is spending so much on AI infrastructure is that it is deeply tied to OpenAI: they run a large chunk of OpenAI’s compute on Azure. So if OpenAI shrinks or fails, Microsoft’s appetite for AI infrastructure shrinks too.
2. OpenAI directly (the $22bn in future revenue Coreweave is counting on).
So if OpenAI stumbles: Microsoft might cut back on Coreweave (there goes 70% of current revenue), AND OpenAI might not pay up on that $22bn (there goes the future revenue).
Two income streams. One point of failure.
That said, OpenAI has a slight protection mechanism.
Coreweave gave OpenAI $350m worth of shares in a special purpose vehicle that houses the compute capacity Coreweave provides them. And if Coreweave defaults, OpenAI has a claim on those assets.
It’s like renting a commercial kitchen, but the landlord gives you a contract saying, “If I go bankrupt, you get to keep the ovens.” OpenAI made sure that if Coreweave collapses, they don’t lose access to the compute they depend on.
Why Is Every Media Outlet Now Hyping a Bust?
So why does everyone stare at Nvidia when they want to know if AI is a bubble?
Because Nvidia is where the money shows up.
All those hyperscalers spending billions on AI? They’re buying Nvidia chips. Coreweave building data centres? Nvidia chips. OpenAI training the next model? Running on Nvidia chips.
Nvidia sits at the chokepoint. If companies are actually spending on AI infrastructure, it shows up in Nvidia’s revenue. If they’re slowing down, it shows up in Nvidia’s inventory.
That’s why Nvidia’s earnings reports move markets. It’s not really about Nvidia — it’s about what Nvidia’s numbers reveal about everyone else’s spending.
Now that you know the players and their love-hate relationships, let’s look at what’s actually happening with Nvidia’s numbers.
Nvidia’s last quarter showed inventory jumping from $15bn to $19.7bn.
Inventory value has been doubling:
from 2022 to 2024,
then again from 2024 to 2025.
Now we’re 3 quarters into FY2026, and it’s doubled again year-on-year.
If you look at Nvidia’s history, that growth rate isn’t unusual for them, even if it raises eyebrows by normal accounting standards.
Part of the explanation: they need to produce more to satisfy demand, nothing ships immediately, so inventory value rises. Also, the chips themselves are getting more expensive, so inventory becomes more valuable even if the unit count stays flat.
That said, there’s a more concerning signal: it’s taking longer to convert that inventory into cash.
Inventory going up is one thing. Inventory going up and taking longer to sell is the classic sign of a company producing into falling demand.
Not a good signal.
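If you want to watch this signal yourself, the standard metric is days inventory outstanding (DIO): roughly how many days of cost-of-goods-sold are sitting on the shelf. Here’s a minimal sketch using the inventory figures from above; the COGS value is a placeholder, not a filing number, so swap in the real one from Nvidia’s 10-Q:

```python
# Days inventory outstanding: how many days it would take to sell through
# the inventory on hand at the current cost of goods sold (COGS).
def days_inventory_outstanding(inv_start_bn, inv_end_bn, quarterly_cogs_bn, days=91):
    avg_inventory_bn = (inv_start_bn + inv_end_bn) / 2
    return avg_inventory_bn / quarterly_cogs_bn * days

# Inventory went from $15bn to $19.7bn last quarter (figures from above).
# The COGS input is a PLACEHOLDER -- substitute the actual 10-Q figure.
dio = days_inventory_outstanding(15.0, 19.7, quarterly_cogs_bn=15.0)
print(f"DIO: {dio:.0f} days")  # a rising DIO means slower conversion to cash
```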
Now look at what Wall Street expects for Earnings Per Share (feel free to skip the math and go straight to the next section if you’d prefer):
FY2026 (current year): $4.68 — with $3.16 already booked and $1.52 expected this quarter. At today’s price, that’s a P/E of 37.88.
FY2027 (next year): $7.61 — that’s 62.6% growth. P/E drops to 23.3.
FY2028 (two years out): $9.51 — another 26.3% growth. P/E falls to 18.45.
There’s a tool investors use to check if a growth stock is fairly valued: the PEG ratio. It’s P/E divided by expected growth rate. A PEG of 1 means the market believes the growth forecasts. Below 1 means the stock is undervalued — if those forecasts are right.
Nvidia’s PEG for FY2027 is 0.37 (P/E of 23.3 divided by 62.6% growth). For FY2028, it’s still below 1 at 0.70.
A PEG of 1 would mean the market entirely agrees with the consensus forecasts. So if the market price is right and the forecasts are wrong, trading at a P/E of 23 implies 23% growth as “fair.” That’s about a third of what Wall Street expects.
In other words, the market thinks Wall Street is too optimistic by about two-thirds.
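If you’d like to check that maths, here it is as a small Python sketch, using only the consensus figures above (today’s price is backed out of the FY2026 numbers, so expect small rounding differences):

```python
# PEG check on the consensus figures quoted above.
price = 4.68 * 37.88  # share price implied by FY2026 EPS x P/E (~$177)

# {year: (EPS forecast, consensus growth %)} per the text above
consensus = {"FY2027": (7.61, 62.6), "FY2028": (9.51, 26.3)}

for year, (eps, growth) in consensus.items():
    pe = price / eps   # forward P/E at today's price
    peg = pe / growth  # PEG = P/E divided by expected growth rate
    # At PEG = 1, the market-implied "fair" growth equals the P/E itself.
    print(f"{year}: P/E {pe:.1f}, PEG {peg:.2f}, "
          f"market-implied growth ~{pe:.0f}% vs consensus {growth}%")
```

For FY2027 that prints a PEG of about 0.37 and a market-implied growth of roughly 23%, about a third of the 62.6% consensus. Hence “too optimistic by about two-thirds.”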
That’s a big gap. Quite a few people have pointed to Nvidia’s latest earnings as justification for that skepticism.
So what explains it? Two theories.
Theory A (Pushed by Much of the Media and the Hype Crowd): Demand Is Falling
In traditional business analysis…
Inventory piling up + Slower conversion to cash = Demand is falling.
That’s the textbook read.
If you apply that lens to Nvidia: customers may be buying fewer chips than expected, TSMC is still fulfilling orders Nvidia committed to months ago, and the end demand isn’t there.
And if demand is genuinely falling, Nvidia has a second problem: they can’t adjust quickly. Meanwhile, the hyperscalers aren’t standing still. Google’s TPU is gaining traction. Microsoft, Amazon, and Meta are all investing in alternatives.
So, if demand is down and customers are using this window to diversify away from Nvidia, some of that business isn’t coming back.
This is Theory A: reading everything off a single checkpoint, inventory.
Theory B: Demand Is Fine. There’s Just Nowhere to Put the Chips.
Remember Coreweave’s data centre problem?
Here’s what they said about it in their November earnings call:
Full-year capex was previously expected to reach between $20 billion and $23 billion. Due to the delayed data center, 2025’s capex is now expected to be around $12-14bn, a significant reduction.
They further explained:
We expect this reduction in capex from our prior guidance will be mostly reflected by a corresponding increase in construction in progress due to the buildup of infrastructure waiting to be deployed following the delivery of powered shell capacity. As such, the vast majority of the remaining capex we had previously anticipated to land in Q4 will now be recognized in Q1.
And then:
Given the significant growth in our backlog and continued insatiable demand for our cloud services, we expect capex in 2026 to be well in excess of double that of 2025.
In plain English, the issue is NOT that the demand for GPUs disappeared.
It’s that there aren’t enough buildings to put them in.
From this angle, it’s completely normal for Nvidia’s inventory to pile up and take longer to sell. The customers want the chips. They just don’t have anywhere to install them yet.
Satya Nadella said the same thing. Though he’s also motivated to say that, given how much of Microsoft’s business is tied to ongoing AI compute demand. Cui bono.
Why It’s Not About ‘People Still Use ChatGPT’
Theory B isn’t about end-user demand.
Sorry to inform you that your auntie using ChatGPT doesn’t move the needle.
The real question is whether infrastructure spending has a temporary delay or a permanent ceiling.
So, to make sure we don’t repeat the mistakes the Bubble Busting Forecasters like to make, let’s be clear about what’s NOT the same:
Value of technology ≠ valuation of companies. Your auntie using ChatGPT is not the same as Nvidia’s stock going up. Usefulness doesn’t equal spending at the rate Wall Street expects.
Hyperscalers building alternatives ≠ end of AI growth. Google’s TPU gaining traction means Nvidia might lose market share. It doesn’t mean AI demand is shrinking. The money might just flow to different chips.
Inventory piling up ≠ demand is falling. The same data (inventory up, slower cash conversion) has two interpretations. It could be softening demand. Could be nowhere to put the chips yet.
Coreweave cutting capex ≠ demand falling. They cut from $20-23bn to $12-14bn because buildings weren’t ready, not because they didn’t want to spend.
What the market is really debating is the speed at which capacity can be rolled out, and how much goes to Nvidia versus alternatives like Google’s TPU.
If Coreweave shifts some spending to next quarter because buildings aren’t ready, Nvidia’s revenue shifts with it. Instead of 50% growth next year, maybe it’s slower growth next year, and the slack picks up the year after.
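A toy illustration of that timing effect (all numbers invented to show the shape; none are forecasts):

```python
# Deferred revenue isn't destroyed revenue: it reshapes the growth curve.
# All numbers are invented for illustration.
base = [100, 150, 180]     # on schedule: 50% growth, then 20%
shifted = [100, 135, 195]  # same 3-year total, but 15 slips into year 3

def growth_rates(series):
    return [(b / a - 1) * 100 for a, b in zip(series, series[1:])]

print("on schedule:  ", [f"{g:.0f}%" for g in growth_rates(base)])     # 50%, 20%
print("with slippage:", [f"{g:.0f}%" for g in growth_rates(shifted)])  # 35%, 44%
```

Same total, very different-looking growth prints along the way.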
So you have one clear indicator to watch for whether the LLM bubble bursts or not: hyperscaler capex commitments.
As long as Amazon, Google, Microsoft, and Meta keep driving data centre buildout, GPU demand stays intact over a reasonable time horizon.
The Verdict
This is all very interesting and exciting, but I’m OK not playing in this space.
Nvidia has a lot of growth expectations baked in, and the big spenders seem to keep their capex commitments intact for now. The timing might be slower, and in the long run, some of that might rotate into different chip architectures like Google’s TPU.
In my book, that makes Nvidia a dubious long: I’m not sure how much upside is left for the downside risk you’d be taking.
But it’s NOT a slam dunk short either. We’ve got Intel for that.
So I’m happy to sit this one out for now.
Maybe I’ll revisit if capex plans shift significantly up or down, or if the world gets clarity on OpenAI’s or Anthropic’s future as a viable business. Anything that would meaningfully change today’s narrative.
Otherwise, I think this is just playing at the margins, and there are better places to capture upside right now with less drama.