It’s 2035.
You wake up to an AI assistant that has already curated your news, is going to meetings using your avatar, and has all your favorite snacks and entertainment in front of you. Life feels effortless, maybe a little too effortless.
By then, AI will be woven into our lives in a way more impactful than electricity, one that doesn't just change our lifestyle but reshapes who we are.
“Being Human in 2035” comes from the team at Elon University, Janna Anderson and Lee Rainie, who’ve spent two decades obsessing over how technology reshapes humanity itself. If anyone’s qualified to dive into these questions, it’s them.
For this report, they gathered 301 thought leaders from across 14+ industries, handpicking voices from government, tech, media, and beyond.
As the authors note in their opening: an overwhelming majority of those who wrote essays focused their remarks on the potential problems they foresee. While they said the use of AI will be a boon to society in many important, even vital, regards, most are worried about what they consider to be the fragile future of some foundational and unique human traits.
Among the respondents, 61% expect the change to be deep and meaningful, or even fundamental and revolutionary.
In this report, they found that half of the 300+ experts believe AI's effect on the “essence” of being human may carry a drawback for every benefit. About 23% mostly foresee a net negative impact, while only 16% are mostly optimistic that AI will change humanity for the better.
Then the experts were asked to give an opinion on how living with AI will change 15 core traits of who we are — how we think, feel, and act — by 2035, compared to the world before advanced AI.
They were asked questions such as:
How does AI shift our ability to relate, empathize, and build social bonds?
How does AI change our ability to trust information, others, and shared norms?
How does AI affect our thinking, learning, and cognitive independence?
How does AI affect our autonomy, motivation, and sense of meaning?
I broke their insights into three big buckets: mind, brain, and purpose.
Last week, our focus was on how human emotions and connections might shift in 2035. Read Part 1, Alone, Together.
This is Part 2, Brains on Autopilot and Where Do You Stand in the AI Hierarchy?
Your Brain on Autopilot
With AI sidekicks handling everything from drafting your emails to planning your entertainment and meals, your productivity may get a boost in some ways.
Why bother lifting a finger or looking anything up yourself? Your digital assistant will just ping someone else’s AI, and the work gets done faster and, frankly, better than you could manage.
Try to beat it and you’ll probably just end up bruising your ego.
Until you have a sudden wake-up call and realize you haven’t done any serious thinking in months…
The convenience trap we’re falling into has slowly been set up.
This report highlights five distinct traits of the human brain that are susceptible to AI influence: deep thinking, reflection, decision-making, confidence, and curiosity.
Rather than seeing these traits as isolated, think of them as steps on a spiral staircase, each one lifting you higher. Deep thinking sparks reflection, which shapes your decisions, fuels your confidence, and keeps your curiosity burning. But like any staircase, it works both ways: take the wrong turn, and you can just as easily spiral down.
These steps don’t stand alone when you start using AI. The way AI nudges your thinking can either turn this spiral into a launchpad… or a slide.
Of course, everyone wants to take the right turn, and no one wants their brain to deteriorate.
So, at the end of this article, I will get to the real question: how do we avoid it?
Shall we?
How Will AI Affect Our Deep Thinking About Complex Concepts?
Half of the experts believe that our capacity for deep, complex thought will deteriorate in the next decade.
This aligns with my reports on several academic findings. Read Why Thinking Hurts After Using AI? and Seduced by AI’s Convenience.
We talked about how AI can act like a mental “shortcut,” and we’ll gladly take it.
As psychologist Russell Poldrack points out, the ability of AI systems to perform increasingly powerful reasoning tasks will make it easy for most humans to avoid having to think hard… the urge to think critically will continue to dwindle, particularly as it becomes increasingly harder to find critical sources in a world in which much internet content is AI-generated.
Why puzzle through a hard problem when a machine hands you an answer?
Will AI Stop Us From Self-Reflection (Metacognition)?
A positive note is that AI could serve as a mirror to help people understand their own cognitive biases and thinking patterns.
However, if you’re not even aware of how or why you came to an answer, you lose the ability to reflect.
Over a third of experts warn our “metacognition” might stall out as AI takes the wheel. When a system constantly tells you where to turn, you stop checking the map. You stop even wondering why you’re headed that way.
Dave Edwards, co-founder of the Artificiality Institute, noted the emergence of the ‘knowledge-ome’ – an ecosystem where human and machine intelligence coexist and co-evolve – which transforms not just how we access information, but how we create understanding itself. AI systems reveal patterns and possibilities beyond human perception, expanding our collective intelligence while potentially diminishing our role in meaning-making… we risk losing touch with the human-scale understanding that gives knowledge its context and value.
Knowledge becomes a readout rather than a story of your own.
Will AI Impact Our Decision-Making And Problem-Solving Ability?
Some experts spot a silver lining here. About 40% believe AI will mainly boost human problem-solving by 2035, while 30% see the opposite.
This one makes me pause.
The real decision-making and problem-solving that matter come from deep thinking and trial and error, no? There aren’t many shortcuts to a good decision.
Yet if most other cognitive skills in this report are trending down, it makes me wonder what “better problem-solving” really means. What decisions would you actually make when deferring to a system that is just one click away?
Slowly, you become the passenger (in theory).
I get it: for now, AI can’t actually make decisions. I talked about the reasons in "AI Agents Problems No One Talks About." Today’s models crunch stats, not meaning.
But we're talking about ten years from now.
I am curious: are those 40% of experts betting that AI will still be stuck at statistical guesswork? Or should we prepare for something much stranger, an AI that can make decisions without our guidance?
Laura Montoya, founder of Accel AI Institute, commented: The boundary between human and machine may blur as AI becomes more integrated into human decision-making. AI-driven assistants and advisors could influence our choices, subtly reshaping how we think and act. While this partnership may lead to more efficiencies, it risks diminishing human agency if individuals begin to defer critical thinking to algorithms.
Will AI Sap Our Confidence in Our Own Abilities?
When you solve enough tough problems on your own, confidence becomes an evidence-backed trust in your judgment (part of your self-esteem). It’s a lagging indicator; you earn it after the struggle.
But what does it do to your self-confidence when a machine always knows best?
A few experts suggested that humans can gain knowledge and have uplifting experiences through AI systems that build their confidence in their native abilities and understanding of the world, just as humans acquire such wisdom from other humans.
In contrast, 48% of the experts believe humans’ trust in their own abilities will slowly erode.
Computer scientist Eric Saund, who has been in the field for 30 years, said: Human competence will atrophy… To play serious roles in life and society, AIs cannot be values-neutral. They will sometimes apparently act cooperatively on our behalf, but at other times, by design, they will act in opposition to people individually and group-wise. AI-brokered demands will not only dominate in any contest with mere humans, but oftentimes, persuade us into submission that they're right after all.
I can’t help but wonder… as we start losing trust in our own ability, does that just push us to rely on AI even more? The less we back ourselves, the more tempting it is to let a trusted entity make decisions for us.
What About Curiosity and Creativity in 2035?
Experts seem torn.
On one hand, 42% expect AI to boost our curiosity and creativity by 2035.
An inventor, Tom Wolzien, captures this possibility: Today’s AI … allows me to expand my curiosity. I can ask it, ‘What about this? Explain that in terms I can understand,’ and so on. I’m not a scientist nor am I an environmentalist, but AI can help me understand the damaging significance of methane when compared with CO2. It can visualize the size of the block of carbon produced as a result of a flight I take across the country or around the world…
On the other hand, the same survey also found,
Half of the experts predict we will lose our capacity for deep thinking,
40% believe our problem-solving skills will decline,
and 48% fear our confidence will erode.
I’m sure you see the paradox now.
An AI-fueled creativity renaissance coexists with a decline in concentrated thought. It’s as if we’re promised a steeper trail (more curiosity) even as our lungs (attention spans) give out.
The idea that AI will somehow supercharge curiosity and creativity for everyone just doesn’t add up. Remember my report, Is Brainstorming With AI REALLY A Good Idea?
Here’s what those studies show:
AI boosts individual (especially those with zero experience in an industry) originality, but group diversity tanks, and everyone ends up thinking inside the same new box.
The more people lean on AI, the more similar (not more creative) their work becomes.
If not careful, brainstorming with AI = comfort zone on steroids. You get more ideas, but they’re all coming from the same algorithmic gene pool.
Even with the latest AI agents, humans still excel at producing creative and diverse ideas.
As journalist Maggie Jackson insightfully points out:
curiosity is born of a capacity to tolerate the stress of the unknown, i.e., to ask difficult, discomfiting, potentially dissenting questions. Innovations and scientific discoveries emerge from knowledge-seeking that is brimming with dead ends, detours and missteps. Complex problem-solving is little correlated with intelligence; instead, it’s the product of slow-wrought, constructed thinking…
Remember, LLMs are like that friend who’s read every book in the library and can finish your sentences, but has never actually lived a day outside.
They don’t invent; they autocomplete.
Using whatever’s popular, safe, or statistically likely.
Ask them for something truly new, and you’ll mostly get a clever remix of old ideas, never a wild leap into the unknown. They have no gut instinct, no lived experience, no way to understand the unique weirdness of your problem, and they draw only from what’s already mainstream.
So if you’re hoping for a flash of genius, an “aha!” that breaks the mold, don’t expect it from a machine built to predict what’s most average.
Pamela Wisniewski, associate professor, said: If we use AI before we learn how to do those tasks ourselves, it will rob us of important scaffolding and the experience of learning by doing. For example, AI does an amazing job at synthesizing and summarizing existing text. However, if we don’t teach our children the process of summarizing and synthesizing text for themselves, we rob them of a chance to deepen their ability to think critically.
The Poor and Rich in the AI World
In 2035, headspace is the ultimate luxury.
Only the rich can afford true silence, deep reflection, real problem-solving, and creative work.
This isn’t fantasy.
The children of tech moguls routinely attend Waldorf or forest schools. These elites cultivate an offline lifestyle and encourage meaningful, real-world play time in their offspring.
Steve Jobs himself said, “They haven’t used it. We limit how much technology our kids use at home.” Peter Thiel, PayPal co-founder, restricts his children to just 1.5 hours of screen time per week. The Snapchat CEO limits his stepson to 90 minutes per week, inspired by his own parents, who didn’t let him watch TV growing up… the examples go on.
In comparison, American teenagers from lower-income households (<$35,000 annual household income) spend 9 hours and 19 minutes on their screens each day.
That’s a more than 40-fold gap in screen time between the rich and the poor.
The divide starts in childhood, and it only widens with GenAI. It also has a profound impact on adults, driven by something beyond mere entertainment: productivity.
This is an outcome of a GDP-focused economic system that naturally demands more productivity from people with fewer resources. It’s the non-privileged like you and me who have to master AI just to keep up, to do more with less, and to squeeze a few extra hours out of a packed week.
For the wealthy, there’s no reason to wrestle with AI’s quirks and hallucinations when you can just hire a human assistant. £200 for ChatGPT or six figures for real help, it’s all pocket change, so why not choose comfort and reliability?
So, a kind of digital abstinence class system is more apparent than ever.
At the top are the Unplugged Billionaires, with human assistants to screen their calls and handle their emails. They can mostly do without a smartphone, their imaginations unsullied, because they consume knowledge-rich content or engage in activities that fuel their human connections and creativity.
Next rung down, the Partially Connected. These are the folks with enough money and time to book an actual holiday, not just scroll past Instagram beach pics. With the basics covered, cash and a blocked calendar, they can afford digital detoxes and those rare, mind-cleansing getaways.
Then comes the Constantly Online: those who rely on monthly paychecks and juggle everything with a bit of help from AI. This group lives plugged into a drip-feed of free apps and endless content, their attention always on loan to the algorithm of the day. We use tech to squeeze out a few spare moments, but how do we spend those scraps of freedom? Mindlessly doomscrolling through social feeds, half of it AI-generated, or venting to a chatbot “friend.”
In such a spectrum, your status isn’t so much defined by your job or address, but more by your ability to unplug.
The more time you spend plugged in, feeding your habits, choices, and moods into the AI pipeline, the more predictable and influenceable you become. For the constantly online, every swipe and scroll gives companies another data point to shape your preferences, your thinking, even your next decision.
In this AI-driven hierarchy, the less data companies have on you, the freer you are.
That doesn’t mean the rest of us are powerless. You don’t need a trust fund to carve out clarity. Even on a tight budget, with enough intention, you can claim back a bit of headspace.
Be mindful of what you share with AI and online.
Make meals sacred. Ban phones at dinner (even for a single day) and savor the conversation or the silence; just enjoy your food and the company.
Pick one day to go offline entirely; leave your phone at home. Enjoy nature, museums, fishing, or golf; pick your liking.
Allow your mind to wander; take walks without music or podcasts. Cultivating boredom helps you appreciate the world; even the noise or traffic can be fun to watch.
The first and best victory is to conquer self — Plato
See you in Part 3. Agency and Purpose.
How AI destabilizes trust, agency, and the search for meaning.