ChatGPT Wrote Their Essays — And Now We’re Surprised No One Learned… Seriously?
Popular study says AI makes you dumb — really? Come on

You’ve probably seen it by now. The MIT study “proving” that using ChatGPT makes people dumber. It’s all over headlines and LinkedIn feeds. Thought leaders waving around screenshots. People smugly reposting it with captions like “I knew it!” or “Told you so!”
Well, I actually read the study. Not the full 100 pages. But all the parts that count: the introduction, abstract, study design, discussion, and conclusion. And I skimmed the rest.
I wanted to know what they actually tested, and I followed the logic carefully.
So let me tell you what everyone is getting wrong, because the study isn’t saying what most people think it is.
Real-World Use? Not Quite
The researchers split participants into three groups. Brain-only, Search Engine, and LLM. The Brain-only group had to write their essays with no tools. The Search Engine group could use Google. The LLM group got access to ChatGPT.
Simple, right? Not quite.
Here is what everyone is missing. These weren’t seasoned ChatGPT users. They weren’t prompt engineers. They weren’t even regular users. Most had never even used it before.
Let me write that again because it’s worth repeating. The majority of participants had never used ChatGPT before this study.
“For the LLM group, we asked if participants have used LLMs before… with the most significant cluster showing no previous use.” (direct quote from page 55 of the study).
They didn’t know how to talk to it. They didn’t know how to shape the output. And most importantly, they didn’t know how to iterate. No follow-up. No back-and-forth. No pushing for clarity.
So what happened?
Well, first they had to figure out how to even use it, and fast. Once they did, they fumbled through, got some kind of answer, and pasted it into their essay. Since that’s what the LLM group did, of course their brains were less engaged. They were more irritated and annoyed. It makes sense their memory and retention were worse.
According to the study itself, participants quickly grew frustrated with the tool, called it unreliable, and turned to Google or their own knowledge instead. They mostly used ChatGPT for small tasks like summarizing prompts to decide what to write about.
They found prompt-writing took effort and limited the AI’s usefulness.
Some even imposed strict word limits just to keep the output manageable. ChatGPT helped clean up grammar and structure but didn’t add much creativity or help express ideas the way participants wanted. Time pressure sometimes forced people back to it, even though many felt guilty about it; some even called it cheating.
(This section paraphrased from page 38 of the study)
They didn’t use the tool. They offloaded to it. Try to remember what it was like when you first started using ChatGPT.
The Brain-only group didn’t “beat” AI
In later sessions, the researchers did something interesting. They flipped the groups.
People who had written without tools (Brain-only) were allowed to use ChatGPT. And suddenly their performance went up.
Memory stayed strong.
Semantic recall was higher.
Brain connectivity lit up.
The researchers noted more top-down processing and deeper integration.
Why?
Because those participants had already structured their ideas. They had mental scaffolding. When they turned to ChatGPT, they didn’t ask it to “write this for me.” They already had their foundation to work from. So they likely asked it to refine, clarify, and enhance the writing.
And of course it worked. The tool amplified their cognition instead of replacing it.
The Search Engine group sat in the middle: not as strong as Brain-only, but stronger than the LLM group. That makes sense. Using Google still requires you to actively participate and integrate information. You have to read, evaluate, and decide what to include. You’re still thinking, even while retrieving.
It’s a messier kind of help. But it still keeps you in the driver’s seat.
What this study actually shows
If you hand someone a powerful tool before they’ve formed a thought, it will do the thinking for them. It’s no different than someone standing over you saying, “Click here, type that, drag this, save it there.” You’re not actually learning. You’re just following directions.
And when they’re gone? You won’t remember a thing.
The only way to really learn is to do it yourself. Find the information. Click the wrong thing. Figure out where you went off track. That’s how it sticks.
But if you give them the tool after they’ve already engaged their brain? Then it becomes an asset. It lets them dig deeper, explore more, tighten structure, and add clarity.
That’s exactly what the study found. Unfortunately, most people didn’t read it that way.
Artificial Use of AI
Let me give you a contrast.
I didn’t pore over all the raw findings or charts, but I did ask ChatGPT to help me make sense of the methodology and group design. I wanted to understand what the study really shows. And what it doesn’t.
I wasn’t offloading cognition. I was increasing it. And you know what happened? I retained more. I understood more. I remembered more.
Not just because the tool gave me answers. But because I stayed in the loop the whole time. This kind of active collaboration is what the study didn’t test.
It didn’t include participants who iterate with ChatGPT. Who revise prompts. Who challenge its responses. Who say “that’s not what I meant, try again.” Or “explain it to me like I’m five.” Or “compare that with the article I linked earlier.”
It didn’t test the kind of AI use that requires metacognition. And that’s a massive oversight.
If you’re using ChatGPT to avoid thinking, yeah, your brain will thank you for the nap. But if you use it after you’ve thought through your ideas? If you use it to question, expand, and refine? You’ll actually retain more.
That matches what we know from learning science. AI isn’t a shortcut to wisdom. But it can help show you what you might be missing. Especially if you stay in the loop.
Stop blaming the tool
I remember when I first learned to ride a two-wheel bike. It’s not something you just know how to do. Most kids fall more than once. We had a little hill in our backyard, and one day my dad gave me a push. After a few tries and a few scrapes, I finally got it.
But no one saw me fall and said, “That’s it. Put the bike away. It’s ruining her coordination.”
Same thing with music. I started playing the flute in sixth grade. “Mary Had a Little Lamb” was the first thing I ever learned. And if you’ve ever been to a sixth grade concert, you know you basically need earplugs to survive all the wrong notes.
But no one walks out of that auditorium and says, “Welp, tell those kids to give up. Instruments clearly aren’t their thing.”
We give kids space to learn the hard stuff. We expect some awkwardness in the beginning. We recognize the mistakes as part of the process.
But with AI, we’re doing the opposite right now. One clumsy first attempt, and we’re ready to put the tool away.
This study tested what happens when you give a complex tool to people who’ve never used it and tell them to perform a nuanced writing task. It’s not surprising that it didn’t go well.
But that’s not a reason to dismiss the tool. It’s a reason to teach better use.
A more accurate headline would be this
When people use ChatGPT to avoid thinking, they don’t learn as much. That’s the real takeaway here. And, sorry MIT, it’s not groundbreaking or new.
We’ve known forever that learning takes effort. How much just depends on your skills, prior knowledge, and a bunch of other factors I won’t get into. If something feels too easy, it probably is.
Yes, using ChatGPT without thinking, understanding, or any real curiosity probably won’t lead to much learning. And if that’s the only way you use it, it can absolutely become a crutch. But that’s not the tool’s fault.
According to the study, the best way to use ChatGPT is once you’ve already formulated your thoughts. Once you have solid working knowledge and have wrestled with the ideas, then it can help you sharpen your output.
I don’t completely agree with that. Sometimes you get the most out of it while you’re still in the middle of things: thinking through a problem, talking it out, and trying to connect the dots.
When a chatbot teaches you refrigeration
My refrigerator recently stopped cooling properly. The fridge was warm, and the freezer was cold but not cold enough. I had no clue what was going on, so I started talking it through with ChatGPT.
I gave it details, asked questions, and kept circling the issue. At first, the diagnosis was excessive ambient heat. My house was sweltering. ChatGPT suggested adding water bottles to both compartments to help stabilize the temperature since I didn’t have much food in either.
And that did help. But I kept pressing. At one point, I mentioned that the freezer fan used to be noisy and was now quiet but barely blowing any air. Based on that, ChatGPT suggested it might be the evaporator fan.
So I kept digging. I gave it the model number. I asked how airflow works, how the system is supposed to function, and what typically goes wrong. After some back-and-forth and deep Googling for discontinued parts, I found a compatible aftermarket fan, ordered it for $20, and swapped it out myself.
Now the fridge is working beautifully. No service call. No $150 repair bill. Just me, a screwdriver, and a very patient chatbot.
If I had stopped at that first answer, I would have missed it. ChatGPT would have written it off as a temperature problem and probably suggested turning the AC up. And none of that would have fixed the real issue. The fan was failing.
Because I kept going, kept asking, kept adding context, I figured it out. And I actually fixed it.
Using it to think through the problem with me, step by step, that’s what made the difference. That’s where the real learning happened.
And yes, I now know way more about refrigerators than I ever thought I would.
We need nuance, not outrage
These findings are being misinterpreted by people who want easy narratives and LinkedIn clicks. And it’s working.
AI bad, make humans dumb.
How we use these tools matters more than whether we use them. And if we want to use them well, we have to stop reacting to headlines and start actually reading.
Even if we don’t finish all 100 pages.
Before we rush to agree or disagree with a study, we need to understand what was actually tested, how it was tested, and what the results truly mean. Not just use it for clickbait or fear-mongering.
Otherwise, we end up proving the study’s point but for all the wrong reasons. It's time to start looking past the hype and using AI with purpose and care.
© 2025 Bette A. Ludwig: All rights reserved