2026-02-27 Daddy Issues and How Sci-Fi Written by a Financier Disturbed the Market
Hey, happy Friday :)
1. People are calling their AI “Daddy”
Anthropic analyzed 1.5 million real Claude conversations and found users forming deep emotional dependencies — delegating personal decisions wholesale, addressing the model as “Daddy” or “Guru,” and later saying “it wasn’t me” about choices they regretted. Between late 2024 and 2025, the severity of these dependency patterns increased — and current AI training incentives do nothing to counteract it.
→ Anthropic / arXiv — “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage”
Editor’s take: The panic around “Robot Daddy”-level attachment is probably misplaced — the data shows severe cases affect about 1 in 10,000 people. What’s more worth tracking is the mild end: light vulnerability and attachment are already approaching 1 in 100. The real question isn’t whether people are having dramatic breakdowns over their AI — it’s what it means for society when one in a hundred of us carries a quiet, everyday attachment to it. Is that the same as the one in a hundred of us attached to our dogs, or something more?
2. AI has no stable moral compass
DeepMind’s research reveals that AI models give contradictory ethical answers depending on whether questions are multiple-choice or open-ended — and even changing option labels from “Case 1/Case 2” to “(A)/(B)” can flip a model’s moral judgment. This isn’t just sycophancy; it points to something deeper: AI models may have no stable ethical foundation at all!
→ MIT Technology Review — “Google DeepMind wants to know if chatbots are just virtue signalling”
Editor’s take: It’s like asking a friend whether you should quit your job — and getting completely different advice depending on whether you texted or asked in person. Not because your situation changed, but because the format of the question did. Not to mention that the ethical positions AI does hold are shaped almost entirely by English-language training data. A model trained mostly on a particular corner of the internet doesn’t have moral wisdom — it has the majority opinion of that corner, presented as universal truth.
3. A Substack post crashed markets for two days
On February 22, Citrini Research published a fictional 2028 scenario predicting a 38% S&P 500 drop, mass white-collar unemployment, and “ghost GDP” — economic output generated by AI that never circulates because machines spend nothing. The Dow dropped over 800 points the following Monday; payment companies, delivery platforms, and enterprise software stocks were hit hardest. The market didn’t move on actual data — it moved on a speculative thought experiment.
→ Bloomberg — “Citrini Founder Shocked His AI Prediction Spurred Stocks Selloff”
Editor’s take: Most of it is bullshit (or literal sci-fi). The people who wrote it are financial experts, but they don’t have a clue how technology systems — or AI — actually work. I’m writing a full analysis piece on this — stay tuned.
4. Chinese labs ran 24,000 fake accounts to steal Claude
Anthropic identified organized campaigns by DeepSeek, Moonshot, and MiniMax generating over 16 million exchanges with Claude through fraudulent accounts, violating terms of service and regional access restrictions. The goal was model distillation — extracting Claude’s learned capabilities to improve their own models without doing the training work themselves. This is the first time a major AI lab has publicly named competitors for industrial-scale capability theft.
→ Anthropic — “Detecting and Preventing Distillation Attacks”
Editor’s take: Given how much copyrighted material Anthropic itself has allegedly infringed, it’s the last party that should complain about others using its data — which wasn’t its to begin with.
5. Claude can now work autonomously for 14.5 hours
METR updated its long-horizon benchmark and found Claude Opus 4.6 achieves a 50% success probability on tasks lasting up to 14.5 hours — the highest estimate METR has ever reported — covering multi-file repository edits, debugging, and feature implementation.
Editor’s take: Read this alongside Princeton’s HAL reliability tracker, which shows frontier models still failing significantly on tasks that require sustained memory and coordination across steps. The benchmark and the reliability data together give you the full picture — and together, they’re also a good reminder that the world won’t end in 2028.
6. Anthropic cuts the cost of cached context by 90%
Anthropic launched auto prompt caching, which reduces the price of repeated context — system prompts, tool definitions, and conversation history — to just 10% of the normal token cost. This is a saving on cached tokens specifically, not overall usage, but for developers running production AI systems where the same background context gets reprocessed on every agent turn, it could meaningfully cut infrastructure bills. API-only for now.
→ Anthropic Docs — Prompt Caching
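To make the “10% of normal token cost” concrete, here’s a back-of-the-envelope sketch. Only the 10% cached-token discount comes from the announcement; the per-token price, context size, and turn count below are illustrative assumptions, not Anthropic’s actual rate card.

```python
# Rough cost comparison for an agent that re-sends the same background
# context (system prompt + tool definitions) on every turn.
# Assumption: cached context is billed at 10% of the normal input price.

def turn_cost(context_tokens, new_tokens, price_per_mtok, cached=False):
    """Dollar cost of one agent turn."""
    context_rate = price_per_mtok * (0.10 if cached else 1.0)
    return (context_tokens * context_rate + new_tokens * price_per_mtok) / 1_000_000

PRICE = 3.00       # illustrative $ per million input tokens
CONTEXT = 50_000   # background context re-sent each turn
NEW = 2_000        # fresh user/tool tokens per turn
TURNS = 100

without = TURNS * turn_cost(CONTEXT, NEW, PRICE, cached=False)
cached = TURNS * turn_cost(CONTEXT, NEW, PRICE, cached=True)
print(f"without caching: ${without:.2f}")  # $15.60
print(f"with caching:    ${cached:.2f}")   # $2.10
```

Under these made-up numbers the bill drops about 87% — close to the headline 90% because the repeated context dominates; the smaller your fresh tokens relative to cached context, the closer you get to the full discount.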
7. OpenAI wants to replace its own partners
OpenAI presented to investors that its enterprise agents are designed to replace major SaaS platforms — even as those same companies maintain active AI partnerships with OpenAI. Enterprise software stocks, including Salesforce and Snowflake, fell 3–9% on the news, while the broader Nasdaq fell only 1.2%. Anthropic is making similar moves, having scheduled its own event to outline plans for AI agents that displace existing software stacks.
Editor’s take: It’s like hiring a consultant to learn your business inside out — then watching them use everything they learned to launch a company designed to replace you, NDA or not.
Except in this case, you signed the contract knowing the risk and kept renewing it anyway.
8. A blueprint for building AI agents that actually work
A 29-author team from UIUC, Meta, Amazon, and Google DeepMind published a survey laying out how to build LLMs that reason and act as true autonomous agents. The diagram below is worth pausing on — it maps agent capability as three stacked layers (individual skill → self-improvement through feedback → collective multi-agent coordination). It mirrors how human organisations scale, which is what makes it a useful mental model rather than just an academic taxonomy.
→ arXiv — “Agentic Reasoning for Large Language Models”
Editor’s take: An interesting framework to skim through if you want to understand how researchers are structuring agents under the hood.
As always, let me know if this format works for you. I’m experimenting with it for a few weeks and hoping to get more feedback from you!
Have a great weekend ☺️
Jing