2026-02-20 Latest AI News and Thoughts
Ads vs. no ads; PaperBanana; the second law of thermodynamics applied to AI models.
Hey,
Here’s me trying something new.
This isn’t my typical long analysis piece. If anything, think of it as an email from a friend who reads too much: a list of what I came across during my research, plus my quick thoughts. Make of it what you will.
Happy Lunar New Year!
Jing Hu
The AI Infrastructure Spending Race
Alphabet announced it’s doubling capital expenditure to $175–185 billion this year for AI infrastructure. For context, their entire business generated $164.7 billion in cash last year. They’ve already taken on more debt and cut share buybacks to fund it. Planned AI spending now officially exceeds the cash the whole business generates. → Reuters
Then Amazon went bigger: $200 billion in capex for 2026, 56% more than last year. They generated $140 billion in operating cash in 2025 and spent nearly all of it. This year’s projected spending will likely exceed what they actually generate, which means borrowing. Investors are nervous, and honestly, that seems like the right reaction. → Reuters
There’s a word for what’s happening here:
A bet. A very, very expensive bet that returns will materialise within two years. If they don’t, the pressure is going to be uncomfortable.
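If you want the back-of-envelope version, here it is in a few lines of Python (all figures in $ billions, taken from the reporting above; note that Amazon’s 2025 capex is implied from the “56% more” claim rather than stated directly):

```python
# Back-of-envelope arithmetic on the figures cited above, in $ billions.
alphabet_capex_low, alphabet_capex_high = 175, 185  # announced 2026 capex range
alphabet_cash = 164.7                               # cash generated last year

# How far capex outruns the cash the business produced:
print("Alphabet gap:", round(alphabet_capex_low - alphabet_cash, 1),
      "to", round(alphabet_capex_high - alphabet_cash, 1))

amazon_capex_2026 = 200
amazon_capex_2025 = amazon_capex_2026 / 1.56        # implied by "56% more than last year"
amazon_cash_2025 = 140

print(f"Amazon's implied 2025 capex: ~{amazon_capex_2025:.0f}")
print("Amazon gap if cash stays flat:", amazon_capex_2026 - amazon_cash_2025)
```

Both gaps are real money that has to come from somewhere, and that somewhere is debt.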
The Anthropic vs. OpenAI Drama
Anthropic ran a Super Bowl ad positioning Claude as a thinking tool, explicitly without ads.
Then Sam Altman had thoughts.
He responded that OpenAI prioritises free access to democratise AI, while Anthropic sells expensive products to wealthy customers. He claimed that more Texans use free ChatGPT than there are Claude users in the entire US.
He also said Anthropic blocks competitors from using their coding products, and signed off with:
“This time belongs to the builders, not the people who want to control them.” → CNBC
And then Perplexity joined the war, taking Anthropic’s side and announcing that it won’t run ads either. → The Verge
My quick thoughts:
I’ve always thought that Anthropic has an entirely different business model from OpenAI. Probably even Sam Altman has no idea what OpenAI’s business model is; he seems to be trying to do everything. He’s pretty much a second Musk.
For Anthropic, I think this is a really good move, because their product proposition has always been really clear: built by developers, for developers.
What did surprise me, though, was the Perplexity announcement, because ads would be a sensible business model for Perplexity. It will be an entirely different story, of course, if Perplexity is looking to be acquired.
Perplexity’s Model Council
Perplexity launched something called Model Council, and it’s solving a problem most of us have just quietly accepted: no single model is best at everything. Coding, research, and creative work — they all have different winners. Model Council sends one query to multiple frontier models simultaneously and resolves conflicts or flags where models disagree. → Perplexity Blog
My two pennies:
I know a lot of people who always copy-paste the same prompt across different models. Personally, I never saw the point, but hey, now there’s a feature that does it for you, no manual copy-pasting required.
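For the curious, the pattern itself is simple enough to sketch. Below is my own toy version of fan-out-and-compare, not Perplexity’s implementation; the ask_model_* functions are hypothetical stubs standing in for real provider API calls:

```python
# A minimal sketch of the fan-out-and-compare pattern behind a "model
# council". Not Perplexity's code: the model callables are hypothetical
# stubs, and real answers would need fuzzier comparison than equality.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_model_a(prompt: str) -> str:
    return "42"  # stand-in for a real API call

def ask_model_b(prompt: str) -> str:
    return "42"

def ask_model_c(prompt: str) -> str:
    return "43"

MODELS = {"model-a": ask_model_a, "model-b": ask_model_b, "model-c": ask_model_c}

def council(prompt: str) -> dict:
    # Fan the same prompt out to every model in parallel.
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in MODELS.items()}
        answers = {name: f.result() for name, f in futures.items()}
    # Naive resolution: majority vote, flagging any dissenters.
    consensus, votes = Counter(answers.values()).most_common(1)[0]
    dissenters = [name for name, ans in answers.items() if ans != consensus]
    return {"consensus": consensus, "votes": votes,
            "unanimous": not dissenters, "dissenters": dissenters}

if __name__ == "__main__":
    print(council("What is 6 x 7?"))
```

The hard part is the resolution step: majority vote only works when answers are directly comparable, and free-text responses need something smarter, which is presumably where Perplexity’s actual value lies.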
PaperBanana
Google published a paper on PaperBanana — a tool for automatically generating publication-ready illustrations for academic studies. If you’ve ever spent an afternoon reformatting figures for a paper, you’ll understand why this exists. → arXiv
A quick one:
This one is for my dear academic friends; I do have quite a few professors subscribed to my newsletter 😁
Anthropic on Scaling and Failure
Anthropic’s latest research claims that smarter models don’t necessarily fail more coherently; they fail more chaotically.
Their finding: longer reasoning sequences consistently increase error incoherence. The failures get more random, not more systematic. Scaling doesn’t fix this. Which means the real risk isn’t a superintelligent model making precise, intentional mistakes — it’s one making chaotic, unpredictable ones. → Anthropic Alignment Science
My two pennies:
Here’s your second law of thermodynamics: entropy only increases, and apparently so does the randomness of a model’s errors.
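To make the analogy concrete, here’s a toy simulation; mine, not Anthropic’s methodology. Treat each reasoning step as carrying a small chance of a random, direction-less error, and watch the spread of final answers grow with chain length:

```python
# Toy illustration of error incoherence growing with reasoning length.
# Each step has a small chance of nudging the answer in a random direction;
# the "correct" final answer is 0. Longer chains end up more scattered.
import random
from statistics import pstdev

random.seed(0)

def run_chain(steps: int, p_err: float = 0.05) -> int:
    value = 0
    for _ in range(steps):
        if random.random() < p_err:
            value += random.choice([-1, 1])  # incoherent: no consistent bias
    return value

for steps in (5, 20, 80):
    finals = [run_chain(steps) for _ in range(2000)]
    print(f"{steps:>3} steps: spread of final answers = {pstdev(finals):.2f}")
```

Longer chains, wider spread. The errors don’t point anywhere in particular; they just accumulate, which is roughly the incoherence Anthropic is describing, in miniature.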
I’m keen for some feedback. Tell me if you enjoy this format as a small piece alongside my weekly long analysis.
