The Startling ChatGPT Confession at Graduation That Shocked Viewers
Here's the powerful shift students need to succeed with AI

Welcome to Manic AI Monday 🦾 Each week, I’m unpacking something strange, surprising, or quietly brilliant from the world of AI and how it’s reshaping the way we work, think, and occasionally spiral.
Get ready for a fast-paced start to your week with unexpected use cases, hot takes, and the kind of stuff that’ll have you saying, “Wait... people are actually doing that with ChatGPT?”
So I came across a couple of articles recently. The first was in the New York Post, with the headline: UCLA grad brazenly shows off ChatGPT that did his assignments for him, and critics aren’t happy: ‘We’re so cooked.’
And yeah, I clicked.
There was also a version on MSN that framed it a little differently, but still leaned clickbait: University of California graduate goes viral after revealing they used ChatGPT to graduate. Slightly less dramatic, but still missing the full picture.
It was about a student at a University of California graduation who flashed his laptop screen on the Jumbotron with ChatGPT open, scrolling loud and proud as the entire stadium went wild.
The rest of the internet flipped out in a very different way. “Give the diploma to ChatGPT.” “This is what education has come to.”
But if you actually watch the video or read the student’s LinkedIn post, you get the real story. He wasn’t bragging. He still had final exams due at 5 p.m. on the day of graduation. He was literally working on them during the ceremony that started at 3 p.m.
He was using ChatGPT to help him finish. His professors had encouraged students to use it. But people didn’t care about that part. They saw a screenshot and jumped straight to AI education scandal. Fake diploma. Lazy student. Everyone’s doomed.
If you ask me, the real question here is why any professor would give juniors and seniors a final due at 5 p.m. on graduation day. That’s what people should be asking, not calling out students. But I digress.
Meanwhile, in my home office
I did something that looked pretty similar but not quite the same. I finally tackled this massive mess I had on my computer. My Google Docs, desktop folders, pictures, diagnostic tools as lead magnets, social media stuff, and Substack essays were ALL over the place.
It was like my computer had procreated without my consent. It was my own digital chaos, and I was drowning.
It took me two weekends, Obsidian, and around 20 hours total to organize it all. And I used ChatGPT the entire way through. Not to do it for me. To help me do the thing I would have never attempted without it.

Years in the making, my system had degraded into the digital equivalent of a junk drawer. I’d had enough. I never tackled it before because I felt completely and totally overwhelmed.
I kept asking questions like:
Can I link Obsidian to Google Docs?
Is there a way to make Google Docs update automatically if I make edits in Obsidian?
How do I download Docs to my desktop without doing it one by one?
What’s the fastest way to back up all my photos using iCloud, which I’ve literally been paying 99¢ a month for and never using properly?
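That third question eventually turned into a script. Here’s a minimal sketch of the kind of thing we landed on, assuming you’ve set up Google Drive API access (Application Default Credentials) and installed google-api-python-client. The folder ID and output path are placeholders, not my real setup:

```python
# A rough sketch (not my exact script): bulk-export every Google Doc in one
# Drive folder so you don't have to download them one by one.
# Assumes Application Default Credentials are configured for the Drive API
# and google-api-python-client is installed. FOLDER_ID is a placeholder.
import io
import pathlib

from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

FOLDER_ID = "YOUR_FOLDER_ID"  # placeholder: your own Drive folder's ID
OUT_DIR = pathlib.Path("exported_docs")
OUT_DIR.mkdir(exist_ok=True)

service = build("drive", "v3")  # falls back to Application Default Credentials

# Find every Google Doc inside the folder (first page only; a real run
# would paginate with nextPageToken).
query = (
    f"'{FOLDER_ID}' in parents and "
    "mimeType='application/vnd.google-apps.document'"
)
results = service.files().list(q=query, fields="files(id, name)").execute()

for doc in results.get("files", []):
    # Export each Doc as plain text; other MIME types (e.g. PDF) work too.
    request = service.files().export_media(fileId=doc["id"], mimeType="text/plain")
    buf = io.BytesIO()
    downloader = MediaIoBaseDownload(buf, request)
    done = False
    while not done:
        _, done = downloader.next_chunk()
    safe_name = doc["name"].replace("/", "_")  # Doc titles can contain slashes
    (OUT_DIR / f"{safe_name}.txt").write_bytes(buf.getvalue())
    print(f"Exported {doc['name']}")
```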
Sometimes the answers were off, so I’d ask again. Sometimes it gave me something useful right away. Other times, I had to combine two suggestions to get what I needed. But I was doing the work the whole time.
It was like having a very technical friend who never got tired of me saying, “Okay, but what if I want to do it this way?” or “Wait, how do I do that again?”
Not the same thing
Now, I know someone out there is already lining up to say: well, would you want a surgeon who learned from ChatGPT? No, of course not.
But I also wouldn’t want one who never used modern tools to think faster, track better, or double-check themselves. Using AI well isn’t the same as outsourcing thinking, judgment, and ethical decision-making.
Some people argue it’s just a big search engine, and it certainly can function that way. I’ve had to search Google and online threads to help me find solutions to tech problems in the past. It was a nightmare. You could burn an hour or two just finding a simple fix.
Most of the time, you’d stumble onto fixes that didn’t work or ones you’d already tried.
Let’s call it what it is
I mean, I was debugging and linking systems and asking it to write scripts. That’s coding, right? Vibe coding, but still. The end result is I now have this powerhouse database, synced, searchable, shareable, that holds all of my work.
Most importantly, I can actually use it.
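To give you the flavor, here’s an illustrative sketch of the kind of cleanup script I’d ask ChatGPT to write and then tweak. The folder names and categories here are made up, not my actual setup, and since it really does move files, test it on a copy first:

```python
# Illustrative only: sweep a messy folder into subfolders by file type.
# SOURCE, DEST, and the categories are placeholders; test on a copy first,
# because shutil.move really does move your files.
import shutil
from pathlib import Path

SOURCE = Path.home() / "Desktop"   # hypothetical: the folder to clean up
DEST = Path.home() / "Sorted"      # hypothetical: where sorted files land

CATEGORIES = {
    "Images": {".png", ".jpg", ".jpeg", ".heic"},
    "Docs": {".pdf", ".docx", ".txt", ".md"},
    "Sheets": {".csv", ".xlsx"},
}

for item in SOURCE.iterdir():
    if not item.is_file():
        continue  # leave subfolders alone
    # Pick the bucket whose extensions match; default to "Misc".
    bucket = next(
        (name for name, exts in CATEGORIES.items()
         if item.suffix.lower() in exts),
        "Misc",
    )
    target = DEST / bucket
    target.mkdir(parents=True, exist_ok=True)
    shutil.move(str(item), str(target / item.name))
    print(f"{item.name} -> {bucket}")
```

Nothing fancy. But stack enough little scripts like that together and you’ve rebuilt an entire filing system.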
I even found stuff I had totally forgotten I’d created. Would I have done any of this without AI? Honestly, no. I would’ve stayed overwhelmed. Kept slogging through and managing it the best I could. That’s the reality of ADHD and disorganized systems. Sometimes the problem isn’t effort but access.
What this tech is really for
This is the kind of thing I advocate for: using AI to make things possible that I wouldn’t have been able to do on my own, through the Socratic ChatGPT method - asking it questions, and lots of them. But more importantly, knowing when to say, “No, absolutely not, that won’t work,” or “Can we try this instead?”
Ultimately, you decide what to use from the output and how to use it. So no, ChatGPT doesn’t do it for you. But it becomes your trusted copilot the entire way through.
This is why we have to teach it
And this ties into something bigger. Because here’s what I keep thinking: unless we teach students, beginning in high school and definitely in college, how to use this in a way that’s collaborative, that strengthens their thinking, they’re going to take the easiest route possible.
People worry about becoming reliant on AI, but reliance on tools is nothing new. We depend on the internet, email, cars, WiFi, electricity, and more, and those dependencies have become essential to how we live and work. The goal is learning to use these tools thoughtfully and skillfully so they expand our capabilities instead of limiting us.
If we don’t, students will be searching TikTok and Reddit on how to use ChatGPT to do their homework and how to get it to write multiple-choice questions for them to memorize.
That's not using or engaging with it. That's skipping important cognitive processes. There's no connection or creation of ideas. They're not learning how to think with it.
Teaching them how to prompt, whether it’s one-dimensional, two-dimensional, or prompt stacking, is absolutely necessary. But if that’s all they take away from it, they’re missing the entire point. It’s essentially the equivalent of asking yes-or-no questions, and that isn’t a true partnership.
Imagine if I had gone with a more surface-level conversation with ChatGPT about organizing my files and folders. Sure, I would have ended up with a system. But it would have been one I couldn’t actually use, and I’d be right back where I started.
If all you're doing is putting in a prompt, getting a response, and then adding one more on top of that, it’s not iterative. Sure, it's interaction, but it's limited.
That’s exactly why we need to teach them how to use the tool so they learn to refine and think critically about the suggestions it offers.
Bring back Blue Books? Seriously?
I’ve seen people calling for a return to Blue Books in higher education. You remember those? The cheap little booklets we had to buy and feverishly write our essays in during finals or tests?
There are faculty who want to ban laptops, ban AI, bring everything back into the classroom. And if the goal is to go back to 1980? Sure. That’ll work. But if you’re trying to prepare students for 2025 and beyond? You’re doing them a massive disservice.
You’re not teaching them how to work with the tech. You’re just hoping it disappears.
This is on us
Students aren’t going to learn this on their own. And just like I did, they're going to have to be willing to sit with it, spend hours with it, work back and forth with it. Not take the shortcut of just asking it to tell them what to do.
I think we can all agree we don't want a generation like that.
But banning it, fearing it, or creating vague policies isn't going to solve this. And if we don't teach them the foundations and the fundamentals, the temptation to use it the easiest way possible will be there the moment they start.
They’ll default to whatever gets the assignment done fastest. It’s up to us, educators, parents, anyone who understands this technology, to teach the real difference between using AI as a shortcut and using it to think better. Because they’re not the same thing.
We can’t keep mistaking the tool for the problem. It’s here, and it’s here to stay. We’re not “cooked.” We’re just not teaching the right skills yet.
© 2025 Bette A. Ludwig: All rights reserved
👉 Don’t Forget to Evaluate Your Leadership Approach with This FREE Assessment
If this post gave you something to think about, please tap the ❤️ and share it with your network. It helps more people see these ideas and keeps me creating. Thanks so much for your support! 🙌