What If Your Teen Opens Up to a Bot, Not You?
Weekly AI Reboot: straight talk, smart ideas, stuff worth knowing—#21
I write about leadership and AI, with a focus on why critical thinking about technology matters more than ever.
Please hit the heart ❤️, restack 🔄, subscribe 📨, and all that jazz to help spread the word! 🙌
I once met with a student who, by every visible measure, was doing great. Honor student. Involved in clubs. Kind, respectful, present. We talked about his schedule, his classes. There was nothing in his tone or body language that suggested distress.
A few days later, I got the memo. He had died by suicide over the weekend.
Even with a counseling background, I hadn’t seen a single warning sign. I replayed that meeting a dozen times in my head. Nothing stood out. He seemed happy, looking forward to the classes we’d talked about. To this day, I still replay that conversation and wonder if there was something I could have done.
Looks human, isn’t
This week, OpenAI and Meta both announced changes to how their chatbots respond to teenagers in distress. OpenAI says it will roll out parent-linked controls this fall. Parents will soon be able to link to their teen’s account, shut off features, and get alerts if the system detects distress.
Meta says it’s blocking its bots from discussing self-harm, eating disorders, or “inappropriate romantic conversations” with teens.
This all comes on the heels of a lawsuit: the parents of 16-year-old Adam Raine are suing OpenAI, alleging that ChatGPT helped their son plan and carry out his suicide. It’s heartbreaking. But it also raises a bigger question.
What happens when a teenager turns to a chatbot for help, and the only thing listening is a large language model?
Wired for response, not meaning
We’ve always missed signs. Even trained professionals do. Even well-meaning adults do. But now we’re outsourcing this to a predictive model trained on Reddit posts, and acting shocked when it doesn’t behave like a therapist.
We keep forgetting that these bots aren’t real. They sound human, respond fast, and are available 24/7. They don’t judge, but they also don’t understand. And they certainly don’t care.
A study published just last week in Psychiatric Services found that ChatGPT, Gemini, and Claude all had inconsistencies in how they responded to suicide-related queries. The lead researcher called OpenAI and Meta’s new guardrails “incremental steps.” Because that’s what they are. Steps, not solutions.
No pulse, no pause
We’re inching toward a world where people argue over whether AI is sentient, while teens in distress are pouring their hearts out to a tool that doesn’t know what emotions or feelings even are beyond a logical definition.
If this doesn’t scare us into talking about real AI literacy, not just fear or hype, but actual understanding, I don’t know what will. Restrictions matter. But if a person is in crisis, the only useful safeguard is a human who notices and intervenes.
We like to imagine that technology will fill in the gaps we miss. That a tool like ChatGPT, with enough safety controls and policies, might somehow see what we didn’t. But I’ve worked in mental health. I’ve worked in higher ed. I know how easy it is for someone to mask distress, even when you’re looking for it.
So what happens when the listener isn’t human at all? What happens when someone reaches out, and the only response they get is a pattern of words trained from random sources on the internet?
Welcome to The Spotlight Corner 📢
Here is a shoutout to my favorite piece of the week by Carlo Iacono: The Simulation is Already Better Than the Real Thing?
This post ties directly into what I’m talking about: turning to a chatbot rather than another person. He makes a strong case that the reason is simple: for a growing number of people, the chatbot feels more comforting than the humans in their lives. It’s both a wake-up call and a condemnation of where we are as a society.
What I appreciate about this piece is that he’s actually drawing attention to that. Because when you’re using ChatGPT, it really does feel like you’re talking to another person. It has a personality. A sense of humor. Or at least, it feels like it does. Most people understand that it’s not sentient, but that line gets blurry fast. And that’s part of the issue.
📌 Still thinking about AI and using chatbots for comfort 🤔

Tech Toolbox: Tools I’m Loving Right Now 🛠
My favorite tech tool this month: Glasp
Now I want to switch gears and talk about Glasp. I’ve been loving it because you can highlight directly on websites, your GPT chats, and other online content. You can also create custom quotes to drop into social media, screenshot, or snip for later use.
One caveat: with the free version, everything you highlight is public. Your highlights live on your own Glasp page and aren’t pushed to anyone else’s feed, but anyone who visits your public profile can see all of them. If you want your highlights kept private, you need the paid subscription.
You can’t highlight in Google Docs or Google Calendar, since those are considered sensitive information. But if you want to highlight specific details or data points online or on articles, it’s a simple, useful tool that adds another layer of interaction without much effort.
The best dictation tool I’ve used ⤵
Wispr Flow Referral Link: wisprflow.ai/r/WISPR6911
(You get a $15 credit once you hit 2,000 words! Trust me when I tell you that will happen quickly once you get hooked on this thing).
In Case You Missed It! 🔙
👉 My posts 📝from last week:
➠ Please Tell Me You Didn’t Use AI for Medical Questions (I talk about how I used AI to help me deal with a yellow jacket attack where I got stung at least seven times)
➠ Last Week’s Weekly Reboot: This Week in AI—Scary Good, Actually Scary, and Just Plain Stupid. (I think the title says it all, and this one got quite a bit of traction. So much so that Substack sent me a little notification to tell me!)
➠ Check out my Instagram 😊
And until next week, “Don’t forget to lead with purpose in everything you do.”