2nd Order Thinkers

Is Idiocracy a Snapshot of Humanity in 2035?

Americans' reasoning was already declining before ChatGPT. Now we're building infrastructure that guarantees it gets worse. The question isn't if—it's how fast.

In 2005, Mike Judge made a comedy about humanity getting progressively stupider. In Idiocracy, an average military librarian with an IQ of 100 named Joe Bauers gets frozen in a government experiment.

When he wakes up 500 years later, he’s the smartest person alive.

Not because he got smarter. Because everyone else forgot how to think.

This is a future where a president is a porn star and wrestling champion.

A Costco the size of a city that requires a tram system to navigate. Law degrees from Costco. Hospitals with automated diagnosis machines that only recognize the barcodes tattooed on patients - no barcode, you don’t exist.

The number one TV show is called “Ow! My Balls!”: thirty minutes of a man getting hit in the testicles. Nothing else. Just impact after impact. It pulls the highest ratings in the country; even someone with their eyes glued to TikTok would be amazed.

The crops are dying because they’re watered with Brawndo, “The Thirst Mutilator” - a sports drink. Why? “It’s got electrolytes.” What are electrolytes? “It’s what plants crave.” But how do you know that? “Because Brawndo’s got electrolytes.”

The director thought this would take 500 years of differential breeding: smart people having fewer kids while idiots multiplied.

He was wrong about the mechanism; most of us were. It’s not genetics. It’s ChatGPT. And it won’t take 500 years.

It’d just take the next 10-20 years… for a not-even-smart AI to make us dumb.

TLDR: The Questions Nobody’s Asking (as always)

  • Q: If humans + AI barely outperform AI alone, what value are humans actually adding? A: Almost none. Without intentionally focusing on structured thinking, we're expensive biological middleware adding 10% value at 1000% the cost.

  • Q: Can you actually tell when you’re cognitively offloading? A: No. MIT professors couldn’t. They insisted they were thinking independently while anchoring every argument to ChatGPT’s output. If experts can’t detect their own offloading, what chance does everyone else have?

  • Q: What happens when the economy restructures around the assumption that humans won’t think? A: OpenAI is worth $157 billion. Microsoft embedded Copilot everywhere. Every workflow now assumes AI assistance. We’re building infrastructure for Idiocracy - try backing out now.

  • Q: Why hold conferences on cognitive fitness among people who already understand its importance? A: You're right, gym-goers worrying about couch potatoes makes little sense. Unless we push for strategic legislation and policies to keep Idiocracy from becoming reality.

  • Q: How fast are we actually becoming Idiocracy? A: The movie thought it would take 500 years of evolution. We did it in 3 years with a subscription service.

The Sharpest Question of All: Q: Are we trying to save people who don't want to be saved? A: Yes. And, as history reveals, they'll be the majority. It's only a matter of time before they outvote us, and ChatGPT's ads are powerful during NFL prime time.

Don’t want to go down the Idiocracy path? Share this article.


Shall we?


You should know by now that LLMs can be incredibly powerful.

And those of you who follow me are aware of the growing worry that they might be doing a little too much of the work for us, and that there's a hidden price our brains might be paying. Read these for more context:

A few people have asked me if there are any ways to get around it. Before we talk about a proven solution today, let me start by asking you this:

Are you using AI as an assistant to help you think, or as a substitute for your own thinking?

You might think to yourself,

Of course it's the first one. I'll never let AI think for me.

Perhaps.

On one hand, AI can be an incredible cognitive assistant: a research partner that helps you dig up facts and deepen your understanding, all while you stay in the driver's seat.

On the other hand, it can become a cognitive substitute, doing the reasoning for you.

Can you really tell the difference whenever you interact with your chatbots?

What’s alarming is that the study I’m about to cover today shows that most of us, often without even meaning to, are drifting down that second path.

It can seem similar to using a calculator for math, or a grocery list to remember things. But because of how these chatbots are built, it's a completely different ballgame.

A calculator is never built to get you hooked, but a chatbot is.

Unknowingly, we’re outsourcing the very core stuff, the act of synthesizing complex information, composing an argument based on our very personal experience, and evaluating ideas according to what we believe is right.

I believe you never consciously choose to stop thinking when using AI.

It just happens.

The tool is so convenient that your own mental muscles just fade into the background. And you know, this isn't just a hunch or a feeling.

It has been demonstrated again and again.

For example, this MIT study I discussed before.

They used EEG scans, and they could see that the parts of the brain tied to reasoning and memory just weren’t firing as much when people used AI to help them write.

And there's a fresh discovery: Michael Gerlich, a professor I've interviewed before, sent me his latest study.


Who Is More Likely to Escape the Thinking Trap?

It asks an interesting follow-up question about AI's impact on critical thinking: how do we effectively reduce the side effects of using AI?

He invited 150 people and split them into four groups. The task for everyone was the same. Write a solid, reasoned argument about the pros and cons of democracy.

  • Group One had to do it all on their own. No AI.

  • Group Two could use ChatGPT however they wanted: a total free-for-all.

  • Group Three also got to use ChatGPT, but had to follow a very specific, structured method.

  • Group Four was the AI's own output, used as a baseline.

The arguments were scored on five dimensions:

  • clarity,

  • logical coherence,

  • depth of reasoning,

  • recognition of counterarguments,

  • and originality.

Each dimension was worth 1 to 6 points, for a maximum of 30.

Then three PhD experts (in political science and education) read every argument blind and scored them. Yes, just three people evaluated all 450 responses. (A small rater pool means potential for systematic bias even with blind scoring… anyway.)
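
To make the rubric concrete, here is a minimal sketch of how a rater's scores could be tallied. The five dimension names come from the study; the data layout and the total_score function are my own illustration, not the researchers' actual instrument.

```python
# Hypothetical sketch of the study's five-dimension rubric (1-6 points each, 30 max).
RUBRIC = [
    "clarity",
    "logical_coherence",
    "depth_of_reasoning",
    "counterargument_recognition",
    "originality",
]

def total_score(ratings: dict) -> int:
    """Sum one rater's scores across all five dimensions, enforcing the 1-6 range."""
    for dim in RUBRIC:
        if not 1 <= ratings[dim] <= 6:
            raise ValueError(f"{dim} must be scored 1-6, got {ratings[dim]}")
    return sum(ratings[dim] for dim in RUBRIC)

# Example: a middling argument earns 18 of the 30 possible points.
print(total_score({
    "clarity": 4,
    "logical_coherence": 4,
    "depth_of_reasoning": 3,
    "counterargument_recognition": 3,
    "originality": 4,
}))  # -> 18
```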

The researchers started by analyzing the data and identified three predictors that were statistically significant, i.e., mathematically meaningful patterns in the data. Don't worry about the stats.

More importantly, this is a good enough crystal ball to predict which groups in our society are likely to hand what makes them human over to AI, with both hands, and which aren't.

Education was the most obvious one.

Those with bachelor’s degrees or postgraduate qualifications consistently wrote better arguments than those without. Higher education helps you think better, even when AI is doing half the work.

Age also mattered.

Older participants consistently wrote better arguments.

The researchers believe it’s cognitive maturity—older people have better self-regulation and are less likely to be distracted by flashy AI outputs.

Then there’s the weird one: difficulty.

When people said the task felt harder, they actually performed better. Not worse.

The researchers call this “effortful cognitive engagement.” Translation: if you felt like you were working hard, you probably were. Difficulty in the case of working with AI isn’t a sign you’re failing. It’s a sign you’re actually thinking.

The Illusion of Non-Offloading

A side note, they also discovered an illusion of non-offloading.

Even the experts, people who would say, “Oh, I'm a critical thinker,” were falling into the trap.

They honestly believed they were thinking for themselves, but their work showed they were unconsciously anchoring their own thoughts to whatever the AI produced first.

Get Your Brain on a Treadmill

Now, you are already aware of how AI will impact your critical thinking ability, so you likely won’t be surprised by the result.

Let’s take the last age group as an example.

The numbers reveal that people who used AI without a structured format ended up only 8-15% better than AI working without humans. In other words, whatever “human value” there was didn't show up in the work when no structured methodology was applied.

Human insight only appeared when people used AI deliberately, not casually, resulting in a 17-30% increase in score.

Only with deliberate thinking is there a meaningful leap in the quality of the output.

So what was the secret sauce for that high-performing group?

What was this guided method that produced such incredible results? Well, it’s a five-step process that anyone can learn, and the researchers refer to it as structured prompting.

  1. Initial Reflection. First, you think for yourself and come up with your own ideas.

  2. Targeted Research. Then you use the AI, but only as a research assistant to find specific facts. You don't ask it to write your argument.

  3. Argument Construction. You build the case yourself, in your own words, from the facts you gathered.

  4. Critical Review. Only after you've built your own case do you bring the AI back in, this time as a critic. You ask it to poke holes in your logic.

  5. Reflection and Revision. Finally, you make the final changes, so you always have full ownership of the end result.

This whole process keeps you in control, not the AI.
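
For the curious, here's what that five-step loop might look like as a script. This is a minimal sketch, not the researchers' protocol: ask_llm is a hypothetical stand-in for whatever chat API you use, and the prompts are my own paraphrase of each step.

```python
# Minimal sketch of the five-step structured prompting loop.
# ask_llm() is a hypothetical placeholder; wire it to your chat API of choice.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your preferred chat API.")

def structured_prompting(topic: str) -> str:
    # Step 1: Initial Reflection -- your own ideas first, before touching the AI.
    ideas = input(f"Step 1. Your own ideas on '{topic}': ")

    # Step 2: Targeted Research -- AI as a research assistant, never as the author.
    facts = ask_llm(f"List specific facts and evidence relevant to: {topic}. "
                    "Do not write an argument; return facts with sources only.")

    # Step 3: Argument Construction -- you write the argument yourself.
    draft = input(f"Step 3. Your argument, drawing on your ideas ({ideas})\n"
                  f"and these facts:\n{facts}\n> ")

    # Step 4: Critical Review -- the AI returns as a critic, not a ghostwriter.
    critique = ask_llm(f"Poke holes in the logic of this argument:\n{draft}")

    # Step 5: Reflection and Revision -- the final changes are yours alone.
    return input(f"Step 5. Revise your draft given this critique:\n{critique}\n> ")
```

The design point is the order: the AI never sees the task before you've formed a position, and it never writes prose for you, only facts and critique.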

A participant said:

“AI alone feels like fast food. Guided use is more like cooking—more effort, but you actually understand what you’re eating.”

By following this structure, participants achieved a 50%+ improvement in the quality of the arguments they produced.

That’s how much better the youngest people in the study, 14 to 18-year-olds, performed when they used that guided method compared to using no AI at all.

Instead of avoiding the problem by asking everyone to go on an AI cleanse, the researchers developed a powerful tool that can serve as a great equalizer and significantly boost performance.

Don’t hoard knowledge. It isn’t good for your karma.



Is Society Really Getting Dumber?

Let me point out a few facts.

First, in 2023, researchers examined 13 years of data from 394,378 Americans testing their cognitive abilities between 2006 and 2018.

Across nearly every measure of reasoning—matrix reasoning, verbal logic, mathematical computation—Americans scored worse in 2018 than they did in 2006 (even before ChatGPT). The only cognitive domain that actually improved was spatial reasoning (3D rotation). Everything else declined.

Second, there's a simple capitalist incentive conflict. AI companies profit from the frictionless use of these chatbots, which is fundamentally misaligned with cognitive health.

Yes, the study we just reviewed shows structured prompting works.

It dramatically improves reasoning quality and prevents cognitive offloading. But structured prompting requires effort. It requires friction. It requires you to slow down and think harder, not less.

Now, AI companies make money when you use them more, not when you use them better. Their business model depends on you coming back, clicking that chat box, and getting instant answers. The more addictive, the better. The more you treat it like a cognitive vending machine, the better its quarterly earnings look.

This isn’t new.

We’ve seen this exact pattern play out over and over again in capitalist systems like fast food, the corn industry, and alcohol, where what’s profitable is the complete opposite of what’s healthy.

In every single case, the capitalist incentive is to remove friction, to make the product as easy, thoughtless, and enjoyable to consume as possible. And in every single case, that frictionless design causes massive societal harm.

AI is the same story.

OpenAI won’t be interested in embedding this five-step structured prompting protocol as the default. That’s friction.

That’s you using their product less. That’s you staying in control instead of becoming dependent. Their ideal user is someone who can’t write an email anymore without asking ChatGPT to do it first.

Does the structured prompting method work? In this study, yes. But it's the equivalent of “cook your own meals,” “read ingredient labels,” or “drink water instead.”

And because no AI company will be there to make it easier for you… on top of the first fact, that our intelligence is already in decline…

So here’s what you need to ask yourself:

  • Knowing structured prompting requires effort upfront, will you actually do it? Not your kids. Not students. You. Right now, when you open ChatGPT for the next email, document, or analysis, will you go through five deliberate steps?

If you still remember, the study shows that even experts experienced the “illusion of non-offloading.” If our metacognition is this unreliable, can you ever actually know when you're thinking for yourself versus just editing AI output?

The tools aren’t going away. The incentives aren’t changing. So the question becomes, how willing are you to fight against a system that’s designed, from the ground up, to make you stop thinking?

We aren’t done yet.

Let’s go a layer deeper today.


Why We Can’t Actually Mind Our Own Business

I've been invited as a panellist at a conference this weekend: “The Human Factor of AI Implementation.”

The title is polite.

What it really means: a room full of people who care about their thinking. We’ll spend Saturday morning debating structured prompting protocols, discussing how to use AI without letting it atrophy your brain.

The audience will nod seriously about the dangers of unguided AI use.

We’re the gym-goers.

Outside that room, and I mean genuinely outside, hundreds of millions of people are doing exactly what the people of Idiocracy did: using ChatGPT because everyone else is, drafting emails, looking for quick answers, offloading the hard work, without any understanding of what it actually costs them.

The difference between us (gym-goers) and them is that we maintain our “cognitive fitness.” We run our thinking through deliberate processes. We’re aware of the trap.

So what's the point of a bunch of gym-goers worrying about others who don't want to stay healthy?

And why should we be worried about what happens to them?

Because the honest, selfish answer is: we shouldn't have to care. It's not your responsibility to fix someone else's brain atrophy. You maintain your health, they maintain theirs. Done.

Except society doesn’t work that way, and we all know it…

In the UK, we have the ‘free’ NHS.

If you work and pay taxes, you subsidize healthcare for everyone—including people who never exercised, people who smoked for 40 years, people who made every bad health choice available.

You pay for it through taxes.

We accept this because we decided that healthcare is everyone's problem.

We can't just let people suffer… no matter what caused the suffering in the first place.

Now apply that to cognition.

What happens when billions of people lose the ability to think critically?

Not eventually. This decade.

What happens to the economy when you can't find employees who can follow simple instructions? What happens to politics when voters can't care about the long term, only immediate pleasures? What happens to technological progress when fewer and fewer people know how, or care, to improve and maintain complex systems?

The costs NEVER stop at the individual level. They cascade into your life.

  • The economy contracts. Your retirement fund, your job security, your country's ability to compete: all of it depends on a functioning society with functioning thinkers. When the majority can't think, the system breaks.

  • Politics. Democracies require voters who can evaluate policy. When that capacity disappears, demagoguery fills the vacuum. Your one vote means nothing in front of a majority that doesn't think. How smart or educated you are becomes irrelevant.

  • Healthcare. Cognitive decline costs money. Studies show that even early-stage cognitive decline increases healthcare costs by 25-50%. There is NO evidence yet that people who rely on AI are more likely to get dementia, but dementia already costs the UK alone an estimated £26 billion per year. Now, what if there is a correlation? An entire generation could be voluntarily atrophying its cognitive abilities by letting AI do its thinking. Who pays for that burden? The “gym-goers”: you, who maintained cognitive fitness, through taxes and economic stagnation.



My Verdict

The world of Idiocracy was supposed to be satirical. Too extreme to actually happen.

We’re walking into it voluntarily.

Just through a different mechanism than the movie imagined.

Unguided AI use produces cognitive offloading without improving reasoning quality. Many users are becoming passive recipients of AI output, losing ownership of their thoughts while believing they’re still thinking critically. The youngest users are the most vulnerable. The economic incentives guarantee it gets worse.

And those of us who are “maintaining cognitive fitness,” who attend conferences and debate structured prompting—we can’t escape the consequences.

Because when society loses the ability to think, everyone pays.
