You might have seen the headlines: “Terrifying MIT Study Finds ChatGPT is Rotting Our Brains”,
or
“Is ChatGPT making us dumb? MIT brain scans reveal alarming truth about AI’s impact on the human mind” (The Economic Times),
or social media posts that went viral, like this one:
All tributes to the recent MIT study, "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,"
Judging by all the AI-generated summaries in the newsfeed, I doubt most of the people hyping it up actually read the full study.
This is irony so pure, it deserves its own trophy.
A study on AI making us intellectually lazy goes viral because AI made it easy to summarize without reading the full study, and because AI made it so easy to spread an idea without anyone bothering to verify it?
Yes, 206 pages is a slog.
Well… that’s why you subscribed, I guess ;) Do give yourself a pat on the back.
I have been reviewing studies on how AI will impact human behaviour. I admit, the result of this study isn’t really a surprise, but I love how it gives the world a hammer to drive home the point that AI is not almighty.
It is a great opportunity for us, the pragmatic group, to be loud and be heard.
Yes, I know, I just made fun of the viral posts. Still, I do see that they serve a critical role in helping the general population become more aware of how AI can affect behaviour and the brain.
My previous pieces, “Why Thinking Hurts After Using AI?” and “The Silent Classroom”, are two great complementary reads if you’re interested in how using AI alters our personal and social behaviour.
This study, however, offers as close a view as humans can get of the mechanisms at play. Short of cutting your brain in half while using ChatGPT… nope, that wouldn’t work; you’d be gone by then.
The researchers set out to answer these questions:
Do people write significantly different essays when using LLMs, search engines, or their brains alone?
How does brain activity differ when using LLMs, search engines, or the brain alone?
How does using an LLM affect your memory?
Does LLM usage affect your sense of ownership over your work?
TL;DR
Tool choice reshapes writing output and thinking. Essays and brain activity differed markedly depending on whether students used ChatGPT, Google (search only), or just their own brains.
ChatGPT-assisted writers produced strikingly homogeneous essays with repeated phrasing. Their brain scans showed up to a 55% drop in connectivity relative to brain-only writers, and only 1 in 9 students who used AI could correctly quote from their own essay.
Only half (9/18) of the students who used ChatGPT for their essays felt they fully owned the work.
Students who wrote without tools produced essays with the most diverse ideas and vocabulary. They also formed the strongest memory traces, recalled their writing accurately, and felt proud ownership of it.
Another lesson learned: human and AI “teachers” saw things differently. Human teachers graded the AI-crafted essays harshly for being formulaic and unoriginal. An automated AI “judge,” however, gave those same essays inflated scores.
This study has sparked numerous debates. Some argue that it lacks sufficient evidence, while others praise it because, unlike previous studies of AI’s impact on critical thinking, a brain-scan graphic has all the elements a study needs to go viral.
I plan to cover this debate, along with the previous studies I found valuable, not just this MIT one.
Shall we?