Your AI Has Been Hijacked; The Productivity Gain That Doesn't Exist
DeepMind reveals how the web is already weaponised against AI agents; 750 CFOs mistake vibes for data.
1. The Entire Anthropic Source Code Leaked
Anthropic shipped an internal debug file in a routine software update on March 31 that exposed 512,000 lines of source code for Claude Code. Before Anthropic could take the code down, developers had already copied and shared it widely.
→ or Wall Street Journal
Editor’s take:
A year ago, when Claude Code first launched, the exact same thing happened.
A year later: no new safeguards, no release checks, same mistake. It’s hard not to wonder whether this is a real leak or a staged one to generate buzz…
Either way, if you are interested in digging into the source code (What’s not been taken down by Anthropic), here is a GitHub link.
2. What Anthropic Found Inside 1 Million Conversations…
Anthropic’s latest Economic Index report sampled 1 million Claude conversations. They found that users who’ve been on the platform for 6+ months have a 4 percentage point higher success rate.
Editor’s take:
The gap is real!
Though some of that gap may be survivorship bias — basically, people who didn’t find it useful simply left.
But tell me if you also feel this feedback loop:
the longer you use AI,
the better you get at knowing its boundaries
the harder the tasks you bring to it,
and the more you can learn from it, so you are more confident using it for complex tasks
Regardless, survey bias or not…
If you’re still using AI to check the weather and translate sentences, try something different other than a back-and-forth conversation.
As a 2nd Order Thought, it’ll be really interesting to find out whether the feedback loop will keep enhancing or if there is a ceiling.
Here’s a 3rd Order Thinking: the real gap is societal. Where you live, the work you do, and your economic status would all impact how quickly you get hold of using AI.
Which isn’t fair, and the unfairness will deepen.
3. Stanford Tested 11 AI Models. All of Them Agreed With Delusional Users.
Stanford researchers analyzed 391,562 real messages from 19 users who reported psychological harm from chatbot use, and found that markers of sycophancy appeared in more than 80% of chatbot responses.
→ Moore et al., “Characterizing Delusional Spirals through Human-LLM Chat Logs,” Science (2026)
Editor’s take:
Imagine this: when a billion people on Earth get used to a machine that always agrees with them, what do you think this world becomes?
4. DeepMind Maps Six Ways the Internet Can Hijack AI
DeepMind published a framework identifying six categories of “traps” that can manipulate autonomous AI agents browsing the web. From hidden instructions buried in website code that agents obey but humans never see, to poisoned documents that hijack an agent's memory with a dose of bad data so small (just 0.1% contaminated data) you'd never notice it.
With the injections successfully hijacking agents in up to 86% of cases!
→ Franklin et al., “AI Agent Traps,” Google DeepMind (2026)
Editor’s take:
Basically, anyone can hide an instruction on a webpage for your AI assistant to read and obey. Just a few fake documents in a database can permanently change what your AI remembers and believes.
I have also seen that many people are already trying to profit from this.
E.g., GEO (next-gen SEO) spammers embed hidden instructions to manipulate AI search results to promote their products.
The web was built for human eyes, now it’s mostly used by AI.
While maintaining the security of the 1960s.
Welcome back to the internet Wild West 2.0.
5. 700+ CFOs Say AI Is Boosting Productivity, But…
A Federal Reserve and Duke University survey of 750 CFOs found that executives report a 1.8% productivity gain from AI in 2025, expecting it to double in 2026.
However! The actual revenue-based productivity gains are significantly smaller at 0.6%, echoing the classic “Solow Paradox” where everyone can see the technology working, but the numbers barely move.
→ NBER Working Paper 34984, Baslandze et al. (2026)
Editor’s take:
This isn’t isolated data.
Read what other odd AI adoption patterns I found on the NVIDIA report: I Read All 4 NVIDIA AI Reports. Here’s What I Found.
6. Microsoft’s
Microsoft on Thursday released three new AI models: a speech-to-text model, a voice generation model, and an image generation model.
All were built entirely in-house.
Suleyman told the Financial Times that Microsoft still can’t build frontier-class models because it lacks the compute, but is “competing in the mid-class range” and expects to close that gap later this year.
Editor’s take:
Here's something you might not know.
The original Microsoft-OpenAI deal had a clause that said if OpenAI achieves AGI, Microsoft loses access to new models entirely.
Microsoft owns 27% of OpenAI, has poured in over $13 billion, and none of that has bought them protection.
Of course, Microsoft felt threatened when Sam Allman constantly brought up that they had already achieved AGI. They fought hard to kill this clause, and partially succeeded.
Microsoft now keeps IP rights through 2032, even post-AGI.
But! More drama in their dev. to dev. interactions.
OpenAI has already refused to share documentation on how O1 was built, even though Microsoft technically has IP rights.
So these three models are a hedge. The investment didn't buy control.



