Two Students, One AI Tool, Two Completely Different Educations
Weekly AI Reboot: straight talk, smart ideas, stuff worth knowing—#46
I write about leadership, education and AI, with a focus on why critical thinking about technology matters more than ever.
Please hit the heart ❤️, restack 🔄, subscribe 📨, and all that jazz to help spread the word 🙌
Two students sit in the same classroom, open their AI apps, complete identical assignments, but walk away with entirely different educations. The teacher sees two finished essays that appear thoughtful, well structured, and responsive to the prompt. Nothing looks unusual.
But behind those two screens, what actually happened couldn’t be more different.
One student pasted the assignment prompt directly into the chat, skimmed the response, made a few edits so it didn’t look copied, and submitted. Efficient, effective, and done in a fraction of the time. The equivalent of choking down your broccoli to get dessert.
The other student started with a question about the concept behind the assignment. When the explanation came back, they asked for clarification on the part that still felt confusing, then requested another example, and challenged the response. The conversation went through several exchanges before they ever started writing.
Both submitted something polished, but only one of them actually wrestled with the ideas.
And here is where the OECD’s Digital Education Outlook 2026 gets very uncomfortable.
Students using AI improved their assignment performance by up to 48%. That sounds like a remarkable educational win. But when AI access was removed, those same students performed 17% worse than before they started using the tool. Unfortunately, that's not a learning gain; it's a dependency that looks like progress until the exam arrives.
AI raised the quality of the output without strengthening the thinking behind it. The very definition of cognitive outsourcing we’ve all been hearing so much about.
Haven’t we seen this before?
This isn't unique to AI, and I experienced it many times over the course of my career. People resisted moving to new systems; even something as simple as adding dual monitors was a challenge. The current conversation about AI in education has a similar shape.
Much of the debate is still focused on whether students should use AI at all, or how schools can restrict it.
But the OECD report points at a reality that institutions are still reluctant to admit: students are already using these tools outside the classroom, on their phones in the evening, in their dorm rooms while writing papers, and at the library before exams. The most significant AI interactions are already happening in spaces where classroom policy has no reach.
Researchers are calling it shadow pedagogy. It’s the learning that happens outside formal instruction, outside the curriculum, and outside any institutional visibility. I’ve written about this recently, and the OECD data makes it impossible to ignore any longer.
The real divide
So the question is no longer whether students will encounter AI during their education. They will and they are. The immediate concern is whether anyone is teaching them how to engage with it in a way that builds something that doesn’t reverse as soon as they stop using it.
Students who learn to interrogate an answer, ask follow-up questions, test explanations, and refine responses get something close to a tutor available around the clock. But those who merely accept the first output and move on get something closer to a very fast homework machine.
One student gets done faster. The other gets better at thinking.
That’s exactly what my AI literacy program for students and parents is built around. The students pulling ahead in the AI race right now are the ones who know how to fight with the technology a little bit. They question it, push back on it, and make it work a whole lot harder.
That’s a learnable and teachable skill. If no one’s teaching your child how to do it, that’s a problem worth solving.
Tech Toolbox: Tools I’m Loving Right Now 🛠
My favorite tech tool this month: Anthropic
I have to admit, Claude has never been my go-to. For a long time it sat in a distant third place behind ChatGPT and Gemini. But something recently shifted.
It passed Gemini months ago. And even with ChatGPT's latest 5.2 update, Claude is edging past that too. Part of it is the model itself. The other part is the company behind it.
Anthropic refused to loosen safety guardrails even when it cost them government contracts. They built the Claude constitution long before AI ethics became a marketing line. And they’ve stayed consistent about why those choices remain important, even when it’s been inconvenient.
I find myself looking at Anthropic in an entirely different way, not just because the model is good, but because the values underneath it actually show up in the product.
Short version: Claude isn’t in third place anymore.
The best dictation tool I’ve used ⤵
Wispr Flow Referral Link: wisprflow.ai/r/WISPR6911.
(You get a $15 credit once you hit 2,000 words! Trust me when I tell you that will happen quickly once you get hooked on this thing).
In Case You Missed It! 🔙
👉 My previous posts 📝to check out:
➠ Weekly Reboot: Putting Her in Her Place: Did AI Write This?
➠ Why Some People Struggle With AI—You’re Prompting Like a Rule-Follower
➠ Three AIs Walk Into a Bar and Disagree—Why One Model Isn't Enough
📸 check out my Instagram 😊
And until next week, “Don’t forget to lead with purpose in everything you do.”
© 2026 Bette A. Ludwig: All rights reserved
For those who don’t want a regular paid subscription, I’m also adding a simple tip jar. A one-time way to support the work if it’s been useful to you and you’d like to contribute in that way.
However you read, thank you for being here 🙌




