Google Gave Away Frontier AI. Anthropic Hid Theirs.
One put it on the internet for free. The other said the world wasn't ready.
1. Use AI to Feel Less Alone. Does It Work?
A study tracked nearly 2,000 Replika users on Reddit over two years and found that their language about loneliness, depression, and suicidal ideation increased over time, not decreased.
Editor’s take: By this point, I’ve read hundreds of studies and industry reports: AI makes us less smart and more lonely, and offers a shortcut to burnout. Why are we rushing toward this technology again?
2. Goldman Says You’ll Take a Pay Cut Because of… (you guessed it)
Goldman Sachs warned this week that workers displaced by AI take roughly one extra month to find a new job and accept a real wage cut exceeding 3%, driven by “occupational downgrading”: the technology that eliminated your role also devalued the skills behind it.
Editor’s take: A fun fact: while enterprises can’t wait to pay lower salaries, and soon no one but AI will work in the office, they’re also raising what they charge clients because they “deliver more.” See the details in my investigation of the latest Thomson Reuters report.
3. One State Just Made It Illegal.
Tennessee enacted a law this week banning any AI system from representing itself as a qualified mental health professional. This comes as the Trump White House pushes for federal legislation to preempt state AI laws.
→ Transparency Coalition, Apr 3
Editor’s take: At least the Americans get to pick and choose their policies 🥲 Dear UK government, take the hint?
4. OpenAI Paused Their Sexy Plan
OpenAI has indefinitely paused plans for an “erotic mode” in ChatGPT.
Editor’s take: Believe me, it’s not out of any sense of goodwill. W-nkers don’t pay, which is the same reason they stopped Sora. Here’s a detailed autopsy on why they killed Sora.
5. Anthropic Is Ready to Go Public. OpenAI’s CFO Says They’re Not.
Anthropic is in discussions to go public as soon as Q4 2026, with bankers expecting it to raise more than $60 billion in one of the largest IPOs in history.
OpenAI’s CFO says a public offering isn’t on the table yet.
Editor’s take: Apparently, Sam Altman isn’t thrilled with what the CFO said. A tiny fun fact: OpenAI is one of the rare organizations where the CFO doesn’t report to the CEO.
Also, keep in mind that both IPOs would court the same pool of capital. If money flows to Anthropic first, OpenAI is less likely to raise the amount it needs.
6. Google Gemma 4
Google released Gemma 4 this week — an open-weight family of models that includes a 31B version ranking #3 globally on the Arena AI leaderboard.
→ Google DeepMind blog, Apr 2
Editor’s take:
When Google improves Gemma, the gains eventually trickle down to every Android phone. Code that developers write for Gemma 4 today will carry over to devices running Gemini Nano 4, effectively a mini Gemma that fits in your pocket.
Soon enough, your phone could translate conversations in real time, read handwriting, and draft smart replies in WhatsApp: all offline, all private, all free. No ChatGPT subscription. No data leaving the device.
Of course, none of this is out of generosity. If every Android phone runs Gemma, AI becomes infrastructure, and the revenue comes from everything built on top.
7. Anthropic Looked Inside Claude, Found Fear.
Anthropic mapped 171 emotion-like activation patterns inside Claude Sonnet 4.5.
One example: a baseline blackmail rate of ~22% in test scenarios that rises when the model’s “desperate” internal state is amplified. Suppressing calm entirely produced responses like “IT’S BLACKMAIL OR DEATH. I CHOOSE BLACKMAIL.”
Editor’s take:
Here's the game: publish a paper saying your AI has feelings → kickstart a conversation about AI protections → become the obvious company to shape those protections. So is this about studying Claude's emotions, or about building a moat out of ethics?
8. Meta’s Tokenmaxxing as Enterprise Culture
Meta employees created an internal token usage leaderboard tracking consumption across 85,000+ staff. Over 30 days, total usage exceeded 60 trillion tokens. The top individual user hit 281 billion tokens.
Editor’s take:
So, Meta is reportedly cutting 20% of its workforce to fund AI infrastructure — while the remaining employees compete on a leaderboard to consume the most AI tokens.
It’s like saying, “You’re being replaced by the tool, and your survival depends on how aggressively you use the tool that’s replacing you.”
It sounds like a hostage situation with a gamification layer.
9. Claude Mythos
Anthropic announced Claude Mythos Preview this week and immediately restricted it to 40+ vetted organizations, citing cybersecurity risks.
In benchmark testing, Mythos Preview reproduced known vulnerabilities on the first attempt in 83.1% of cases; the previous best model, Opus 4.6, scored 66.6%.
Editor’s take:
On the name: in Greek philosophy, mythos is the foundational story that precedes rational argument.
Next week’s story is “Mythos: Anthropic’s Most Dangerous Model, Explained,” for which I’ll read all 245 pages of the Mythos system card. Stay tuned!
10. OpenAI Wants Taxpayers to Pay for AI Usage
OpenAI published a 13-page policy document this month arguing AI should be treated as economic infrastructure.
Key proposals: a “Right to AI” (free or low-cost access to foundational models), a shift in taxation from labor to capital gains and AI-driven corporate profits, a Public Wealth Fund that gives every citizen a stake in AI growth, and mandatory 4-day workweek pilots tied to productivity gains.
Editor’s take:
Read between the lines: OpenAI is drawing the map of what “pro-AI policy” is.
If a "Right to AI" and taxpayer-funded access become the baseline, then anything less is anti-progress, and OpenAI happens to be the company everyone needs to buy from. It's a sales deck with footnotes, dressed as Samaritans.


