Who Loves AI Products?

The strange psychology behind our AI obsession. Hint: enthusiasm thrives where understanding ends.

Just like watching a magician pull a coin from thin air.

If you have no clue how the trick works, you’re dazzled.

But if you’ve seen behind the curtain, learned the sleight-of-hand, you’d realize there’s no magic at all.

Something similar is happening with AI.

In a recent paper, Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity, researchers Stephanie Tully, Chiara Longoni, and Robert Appel conducted six studies to probe this counterintuitive link. Across these experiments, the trend was strikingly consistent: less “AI literate” people were more eager to use AI than their more knowledgeable counterparts.

In other words, when AI is indistinguishable from magic, people embrace it, and once it becomes an explainable tool, some of the awe disappears.

I call it the AI Enthusiasm Paradox.

TL;DR

  • Why are people with less AI knowledge more excited about using AI?
    Because multiple studies found a strong negative correlation: the less people know, the more enthusiastic they are, driven by “magical” perceptions of AI.

  • Does this happen everywhere or just in the US?
    It shows up across countries: the lower a country’s AI literacy, the higher its level of AI adoption.

  • Will knowing more about AI make me want to use it more?
    No, college students and professionals who scored higher on AI knowledge tests were less likely to want AI to do tasks for them, even for creative jobs.

  • Do people actually use AI more if they know less about it?
    Yes, in a US sample, people with low AI literacy reported using AI tools (like ChatGPT) more often than high-literacy peers.

  • Is the “magic” factor real or just a metaphor?
    It’s real and measurable. Seeing AI as “magical” statistically explains most of the knowledge-enthusiasm gap.

  • Does this mean people with less knowledge are naïve about risks?
    No, in the studies, those with lower literacy were actually more fearful about AI’s impact, but still more eager to use it.

  • Should I become a coder to be AI literate?
    No, true AI literacy is about having a broad concept of how AI works, its risks, and your responsibility, not just technical skills.

Shall we?

First time here?

2nd Order Thinkers is a weekly deep dive into how AI disrupts daily life and alters human thought, often in ways you don’t expect.

I’m Jing Hu: trained as a scientist (8 years in research labs), spent a decade building software, and now I translate the latest AI × human studies into plain English for smart, busy readers like you.

If this sounds like your jam, hit subscribe or say hi on LinkedIn.

You get premium insights at grocery store prices. Subscribe before I realize what I'm doing. 👇


Unveiling the Paradox Across Six Studies

What is AI literacy anyway?

The researchers define AI literacy as objective knowledge about AI. It was measured with two quizzes (if you’re interested in testing your AI knowledge, use the links attached):

  • a 25-item quiz with a broader practical scope, including regulatory compliance, privacy, and real-world application scenarios,

  • and a 17-item quiz focused more on fundamental AI concepts and technical understanding.

AI receptivity is basically how open or excited you are about using AI products and services. It’s all about your willingness and interest.

Intuitively, you might expect that more knowledge leads to more willingness to adopt AI. After all, if you understand it, you’d trust it, right?

And you wouldn’t be alone: in four pilot surveys, most people (including 36 tech executives) predicted exactly that; they assumed higher AI literacy would equal higher AI usage.

But the data proved them wrong.

The paper spans six studies, each approaching the question from a different angle, to pin down the relationship between AI literacy and AI receptivity.

Study 1. Which country has the highest AI literacy?

The team analyzed data from 27 countries.

They found that the more AI talent a country has, the less its people actually want AI in their daily lives. There’s a strong negative correlation between a country’s AI literacy and its openness to AI. When they dropped the US data (which is an outlier with much higher AI literacy), the pattern got even stronger.

Two ends of the spectrum:

  • The US and the UK lead in AI skills, but their citizens are among the most skeptical about AI changing their lives for the better.

  • Meanwhile, countries like India, Saudi Arabia, and China have much less AI expertise (compared with the UK and US), but their people are far more receptive, often convinced AI will “profoundly change” their lives or make things easier.

What about wealth and education? The study controlled for the Human Development Index (HDI), life expectancy, education, and income, and the AI Enthusiasm Paradox still held up.
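To make that kind of country-level analysis concrete, here is a minimal sketch in Python of how you could check such a correlation, re-test it without an outlier, and control for HDI. Every number below is invented for illustration; the paper’s actual 27-country dataset is not reproduced here.

```python
# Minimal sketch of a Study-1-style country-level analysis.
# All figures are invented for illustration only.
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "country":     ["US", "UK", "India", "China", "Brazil", "Japan"],
    "literacy":    [0.95, 0.80, 0.35, 0.45, 0.40, 0.70],  # AI-talent index
    "receptivity": [0.40, 0.45, 0.85, 0.80, 0.75, 0.55],  # openness to AI
    "hdi":         [0.92, 0.93, 0.64, 0.77, 0.76, 0.92],  # Human Development Index
})

r_all, _ = pearsonr(df["literacy"], df["receptivity"])
print(f"all countries: r = {r_all:.2f}")

# Drop the US (the paper's high-literacy outlier) and re-check.
no_us = df[df["country"] != "US"]
r_no_us, _ = pearsonr(no_us["literacy"], no_us["receptivity"])
print(f"without US:    r = {r_no_us:.2f}")

# Control for development level, as the paper does with HDI,
# education, income, and life expectancy.
print(smf.ols("receptivity ~ literacy + hdi", data=df).fit().params)
```

Dropping a single high-leverage country and re-checking is a standard robustness move: if the correlation gets stronger, the outlier was masking the pattern, not creating it.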

Study 2. Who’s Using AI to Write Their Essays?

The students most eager to use AI like ChatGPT for their homework are NOT the ones who know the most about AI.

The researchers set up a simple test. 200+ undergrads took the AI literacy quiz: 25 multiple-choice questions covering everything from coding basics to ethical dilemmas. Then, they were shown an AI-generated essay and asked, “Would you use a tool like this to write your assignments?”

Turns out, students with lower AI literacy scores were the MOST likely to use AI for their essays, sometimes even to write the whole thing for them.

In plain English, for every 1-point increase in AI literacy score, willingness to use AI decreased by 0.05 points. So the more a student knows about AI, the less likely they are to let it write their work.

The effect was consistent and statistically robust across studies, with knowledge gaps of just a few points creating a reliable difference in AI receptivity.
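If the coefficient language feels abstract (the same B = -0.05 figure appears again in Study 4), here is a hedged sketch of the kind of simple linear regression behind such a number. The data are simulated; the slope, noise, sample size, and rating scale are my assumptions, not the study’s.

```python
# Sketch of the regression behind "each 1-point literacy increase
# predicts a 0.05-point drop in willingness to use AI".
# Simulated data; the -0.05 slope is baked in for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 250                                  # roughly the study's sample size
literacy = rng.integers(0, 26, size=n)   # score on the 25-item quiz
# Hypothetical willingness on a 1-7 scale: baseline minus 0.05 per point.
willingness = 5.0 - 0.05 * literacy + rng.normal(0, 0.5, size=n)

fit = sm.OLS(willingness, sm.add_constant(literacy.astype(float))).fit()
print(f"B = {fit.params[1]:.3f}, p = {fit.pvalues[1]:.4g}")
# A negative slope reads: the more a student knows, the less
# willing they are to hand the essay to AI.
```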

Study 3. Who Actually Uses AI the Most?

Let’s bust another myth.

The most frequent AI users aren’t the most tech-savvy; they’re the ones with the biggest knowledge gaps.

In Study 3, the researchers surveyed 401 adults and measured:

  1. AI literacy: A 17-question quiz on core AI concepts.

  2. Actual AI usage: How often have you used AI in the past six months? (Image generators, note-takers, writing assistants, etc.)

They found that lower AI literacy = more frequent use of AI tools, and more:

  • People with higher AI literacy actually scored higher on “tech readiness”: they like tech more, but use AI less.

  • Higher AI literacy = more independence. Yet, it’s the less knowledgeable who hand their tasks to AI.

  • But note, having low AI literacy only means you know less about how AI works, not that you’re unintelligent or poorly educated.

In short, people who know less about AI are using it more, and this holds regardless of age, income, or tech enthusiasm.

If you believe early adopters are the most informed, think again.

Study 4. Who Do You Trust, AI or Humans?

The research team wanted to know, “Are people who know less about AI more likely to want AI, instead of a person, making decisions for them even on important tasks?”

How did they test this?

  • 1,099 adults took a 25-question AI literacy quiz (same as earlier studies).

  • Then they were asked: “Given 26 different tasks, would you rather have a human or AI do this?”
    Tasks ranged from predicting joke funniness to hiring employees to recommending romantic partners.

What did they find?

  • Lower AI literacy = Higher preference for AI. For every 1-point drop in AI literacy, preference for AI went up by 0.05 points (B = -0.05, p < .001).

  • This effect held steady even when people did the quiz before or after giving their AI/human task preferences.

But maybe some tasks just should be automated?

So the researchers rated each task as more “objective” (numbers, facts, right/wrong answers) or “subjective” (opinions, gut instinct).

While all participants preferred AI for more “objective” tasks like crunching numbers, people with lower AI literacy were more likely to say, “let the algorithm decide” for subjective matters. This included deeply personal, subjective, or even consequential tasks, like choosing a romantic partner.

The overall message was consistent: the “less-knowing” group favored the algorithm more often, particularly when the task had a bit of human flavor.
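That “stronger for subjective tasks” pattern is what statisticians call moderation, usually tested with an interaction term: does task type change the slope of the literacy effect? Here is a minimal sketch on simulated data; the coefficients, scales, and 0/1 task coding are all assumptions for illustration.

```python
# Sketch of a moderation test: is the literacy -> AI-preference link
# stronger for subjective tasks? Simulated data with the interaction
# baked in for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "literacy":   rng.integers(0, 26, size=n).astype(float),
    "subjective": rng.integers(0, 2, size=n),  # 0 = objective, 1 = subjective task
})
# Literacy lowers AI preference mostly when the task is subjective.
df["ai_pref"] = (4.0
                 - 0.01 * df["literacy"]
                 - 0.08 * df["literacy"] * df["subjective"]
                 + rng.normal(0, 0.5, size=n))

fit = smf.ols("ai_pref ~ literacy * subjective", data=df).fit()
# A clearly negative literacy:subjective term says the literacy
# effect is steeper for subjective tasks.
print(fit.params[["literacy", "literacy:subjective"]])
```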

By now, you might wonder: Why on Earth would knowing less make you use something more? The last two studies dig into that question.

Studies 5 & 6. What Turns People Into AI True Believers?

Why do the loudest AI cheerleaders know the least?

You’d expect technical know-how to breed enthusiasm. Turns out, the real fuel for AI fervor is a sense of magic.

In two back-to-back studies, researchers looked into the psychology of AI acceptance.

Participants again took an AI literacy test, then rated whether they’d prefer AI or humans for a set of tasks. This time, the researchers also measured something new: Do you see AI as “magical”?

They had people agree or disagree with statements like “Artificial intelligence seems magical,” “AI conjures answers out of thin air,” “AI feels like wizardry,” and “If I could see how AI works, it would look like a magic show”.

They combined these into a “perception of AI as magical” index.
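Turning several agree/disagree items into one index is standard survey practice: average each person’s ratings, ideally after checking that the items hang together (Cronbach’s alpha). A small sketch with invented ratings on an assumed 1-7 scale:

```python
# Sketch of building a "perception of AI as magical" index from
# Likert items. Ratings invented; item wording follows the paper.
import numpy as np

# Each row = one participant's 1-7 agreement with the four statements
# ("AI seems magical", "AI conjures answers out of thin air", ...).
ratings = np.array([
    [6, 7, 5, 6],
    [2, 3, 2, 2],
    [5, 5, 6, 4],
])

magic_index = ratings.mean(axis=1)  # one "magic" score per participant
print(magic_index)                  # -> [6.   2.25 5.  ]

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: do the items measure one construct?"""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"alpha = {cronbach_alpha(ratings):.2f}")
```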

The results were consistent with the previous studies: lower AI literacy meant higher AI receptivity.

Lower-literacy participants were more willing to hand tasks to AI than to humans. And critically, lower literacy also meant seeing AI as more magical.

This reminded me of this quote:

Any sufficiently advanced technology is indistinguishable from magic. — Arthur C. Clarke, author of the Space Odyssey series.

And those who aced the AI quiz were far less likely to regard AI as some mystical wizard; they saw the data and wires, not a rabbit out of a hat.

So that sense of magic explained the enthusiasm gap.

Statistical mediation tests showed that the “low knowledge → high AI enthusiasm” link was largely driven by differences in magical perception.

Those with meager AI knowledge tend to be awed and enchanted by AI’s seemingly uncanny abilities, and those feelings of awe fuel their willingness to use it.

When the researchers ran the numbers, the entire indirect effect of lower literacy on greater AI receptivity (through seeing AI as magical) was significant. In other words, if you account for the “AI feels like magic” factor, the enthusiasm difference mostly disappears. The awe factor is the secret sauce.
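“Mediation” has a precise statistical meaning here: the effect of literacy on receptivity travels through the perceived-magic variable. Below is a minimal sketch of the classic three-regression logic on simulated data; real analyses usually bootstrap the indirect effect, and every coefficient here is invented.

```python
# Sketch of a mediation test: does "AI feels magical" carry the
# effect of (low) literacy on AI receptivity? Simulated data with
# the mediation structure baked in.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
literacy = rng.normal(0, 1, n)
magic = -0.6 * literacy + rng.normal(0, 1, n)    # a-path: less literacy -> more "magic"
receptivity = 0.5 * magic + rng.normal(0, 1, n)  # b-path: more "magic" -> more receptivity

def coefs(y, X):
    return sm.OLS(y, sm.add_constant(X)).fit().params

a = coefs(magic, literacy)[1]                # literacy -> magic
c_total = coefs(receptivity, literacy)[1]    # total effect c
both = coefs(receptivity, np.column_stack([magic, literacy]))
b, c_direct = both[1], both[2]               # magic -> receptivity, and direct c'

print(f"indirect a*b = {a * b:.3f}")
print(f"total c = {c_total:.3f}, direct c' = {c_direct:.3f}")
# If c' shrinks toward zero once "magic" is in the model, the awe
# factor accounts for most of the literacy-enthusiasm link.
```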

Finally, Study 6 with another ~1,000 people (via Prolific Academic) bolstered this conclusion and added a twist.

The team added even more layers: Did people feel awe around AI? Did they fear it?

You might suspect that people who know less about AI are just naively optimistic, perhaps they overestimate AI’s capabilities, or simply aren’t aware of the risks.

The studies put that assumption to rest. Lack of knowledge did not mean lack of concern or perspective. In fact, the less AI-literate folks were, if anything, more wary of AI’s limits and dangers in certain ways.

In one sample, the less knowledgeable even rated AI less ethical than the experts did. And the AI novices actually reported more fear about AI’s potential impact on humanity than the savvy group did. They worried that AI could be dangerous or harmful; theirs was not a blind “AI is benign” attitude.

So if low-literacy individuals don’t see AI as more capable or ethical, and they’re more afraid it could go wrong, why on earth would they still be more receptive to using it?

They found that the less AI-literate are drawn to the spectacle of the technology even though they vaguely sense the “magician” could be dangerous.

In the end, the single best predictor was that jaw-dropping wonder.

This also explains why the enthusiasm gap was largest for AI doing human-like tasks.

When AI steps into roles we normally think require a “human spark”, like creative writing, counseling, or art, a novice user is blown away by the apparent sorcery. A seasoned AI user, by contrast, sees behind the magic.

We know the limitations, the training data, and the algorithms. We might appreciate the tool, but we aren’t awed by it.

Worth noting that when it comes to mundane tasks (say, sorting numbers or transcribing text), nobody sees much “magic”; it’s just automation. So in those cases, since awe isn’t a factor, people rely on more practical assessments (like efficiency and accuracy).

In this case, someone with AI knowledge might actually be more receptive because they confidently know the AI will excel at a boring task, whereas the less knowledgeable person doesn’t perceive any wow-factor and might just stick to manual methods.


Tech Literacy ≠ Learning to Code

The AI Enthusiasm Paradox raises an important dilemma for society.

We want people to understand AI better because informed users make better choices and are less likely to be misled.

True tech literacy isn’t about turning everyone into an AI engineer.

It’s about cultivating the ability to look behind the curtain of technology.

That means:

  1. understanding the structure of how AI systems make decisions (so they’re not perceived as infallible or supernatural),

  2. recognizing the risks and limitations (rather than simply marveling at flashy outputs),

  3. and owning your responsibility for how you use or deploy it.

You don’t have to know how to write Python code or build a neural network from scratch to grasp these things.

It’s more important to know, for instance, that an AI art generator was trained on thousands of images (perhaps raising copyright or bias issues), or that a chatbot doesn’t “think” but predicts patterns (so it could also confidently spout nonsense).

This kind of literacy helps you see AI as man-made, fallible, and shaped by design choices: not magic, but a tool created by humans with all our imperfections.

Uncritical awe is dangerous.

History shows that when people treat a technology as magical or beyond understanding, they might overtrust it or misuse it. Think dotcom bubble, or the early days of electricity.

“It was like getting a sudden vision of Heaven.” — that’s how, in Erik Larson’s telling, visitors described the Chicago World’s Fair when it was first illuminated, powered by Nikola Tesla and George Westinghouse’s alternating-current system.

In an age of algorithms, I’d argue the real magic lies in humans: our ability to think critically and uphold our values even as we use powerful new tools.

To stay human and achieve some level of AI literacy:

1. Always ask: how does it work?

2. Focus on the architecture of the problem at hand rather than jumping to a solution.

3. Explore and define the boundaries of technology.

Stay human.
