Apparently, Working Smarter is the New Lazy
New research finds people suddenly don’t trust coworkers using AI

I write about leadership, exploring AI, and why teaching critical thinking about it is more important than ever.
Please hit the heart ❤️, restack 🔄, subscribe 📨, and all that jazz to help spread the word!
🙌
There’s a stretch of concrete just inside my garage where the door comes down. A few weeks ago, I noticed it had turned into a high-traffic super ant highway. Dozens of them. Maybe hundreds. A steady stream, traveling in a clean, perfect line from one end of the garage to the other. No chaos. No confusion. Just a highly organized infrastructure driven by pure survival instinct.
I’ve had ants before, and they are fascinating creatures to watch until they show up uninvited, like distant relatives with zero boundaries. I’d dealt with them years ago, learned they were pavement ants, and remembered that they loved peanut butter. So I mixed some with borax, laid out a little bait in a lid, and left it near the trail.
Within a couple days, the steady line was a mess. No more rhythm. No focused route. Some of them literally moving in circles.
They were much smaller now, and more of them looked drunk than purposeful. They wandered in loose loops, bumping into each other, veering off course, spinning like they weren’t sure what they were doing or why.
Because they weren’t. The poison had started working. The older ones were dying off. The new ones were confused, under-trained, unequipped, but sent out anyway. The colony was disoriented. The whole structure that had made them so efficient had quietly started to collapse.
One thing I’ve noticed about ants: if you put a big glob of bait right in their path, they often avoid it. They’ll walk around it, and ignore it completely. But if you leave a small dab or a little drop right in their line, they find it immediately.
That’s what it looks like when a system begins to break down.
AI feels a lot like that giant dollop dropped right in the middle of our lives. Too sudden. Too disruptive. So many people are stepping around it. We need to find ways to offer it in smaller doses to help them engage.
And I’ve seen all of this before, but not with ants or AI.
The social cost of using AI at work
A recent study published in Proceedings of the National Academy of Sciences looked at how people perceive coworkers who use AI. The results weren’t subtle. Even when AI users produced better work, they were seen as less competent, less hardworking, and less self-sufficient.
The resentment wasn’t about quality but about the tool they were using and what it represented: efficiency without performance theater. The idea that someone could quietly use a process, get their work done faster, and not act like it nearly killed them… apparently that’s threatening.
Across four experiments with more than 4,000 participants, the findings were consistent, and frankly, a little disheartening.
People assumed that using AI made you lazy, replaceable, and less competent. Employees were so worried about what coworkers would think that they actively chose to hide the fact that they used it.
In the first experiment, participants predicted they’d be labeled negatively if they used AI, so they were less likely to disclose it.
The second confirmed what they feared: workers who used AI were rated as lazier, less competent, and more dependent than those who didn’t, or who got help from another human.
These judgments held across gender, age, and job type.
The third experiment focused on managers and found that those who didn’t use AI themselves preferred employees who also avoided it. But managers who did use it? Well, they favored individuals who also used it to the same degree.
The most fascinating part was that performance ratings between AI and non-AI users were, on average, the same. That suggests they were doing their jobs equally well. But those ratings shifted depending on the manager’s own experience with AI. So the bias wasn’t with the candidate. It was with the person doing the evaluating.
The fourth experiment dug into the why. The biggest driver? Perceptions of laziness. That one assumption lowered ratings on fit and hiring potential.
But here's the biggest twist: employees who used AI regularly didn’t criticize others for using it. And when AI clearly fit the task, especially for digital work like sending out customized mass emails, the negative judgment disappeared completely. In some cases, the AI user was even seen as a better fit. For manual tasks, though, the stigma remained.
So to break it down:
AI users were viewed as lazy and less diligent, even when their performance was just as good as that of colleagues who didn’t use it.
Managers who used AI favored employees who also used it. Managers who didn’t use it preferred those who avoided it.
And there were clear ideas about which tasks were acceptable for AI and which ones weren’t.
🎧 Want to hear more? I ran the original research paper through NotebookLM’s Deep Dive Studio, which generated an audio summary in podcast format. It’s a surprisingly cool tool with two hosts that sound remarkably human. If you’re curious about the study or just prefer to listen instead of read, it’s a great option.
So what does this mean?
It means that while AI might make you more efficient, it might also make people think less of you. Your boss’s opinion of AI might matter more than your results. We’re not just dealing with new tools, but with the emotional fallout of what those tools represent.
Even if you use AI responsibly and thoughtfully, someone else might still see you as trying to cut corners.
But underneath it all, is this really about AI? Or is it about anything that disrupts the unwritten rules of shared inefficiency and bureaucratic red tape? If you find a way to move faster, with fewer steps, especially if you don’t ask permission, you’ll trigger something. And if you don’t apologize for it? It’s even worse.
I know because I experienced it long before anyone ever heard of ChatGPT or AI in the mainstream.
When efficiency makes you a threat
Years ago, when I worked in higher ed, we were relocated to a different building while our offices were being renovated. My new office was at the very end of a long hallway, with the student files all the way on the opposite side. So every time a student walked in, I had to get up, walk the entire length of the hall to retrieve their file, bring it back to prep it, then walk back down again to get the student.
The process was fine for returning students, since their folders were already there, but for new students, it was pointless. They didn’t have folders yet. All I had to do was grab a blank advising sheet, write their name on it, and go get them. That was it.
So I made a suggestion. I asked if we could designate students as new when they signed in, so we’d know not to waste time walking back and forth. We could make their folders ourselves. It seemed simple enough, and that tiny shift would save time and steps.
What happened next was next level: they lost their minds. You’d think I was asking them to revamp the entire office and rewrite our job requirements.
One person even said, in a meeting I wasn’t in (because of course), “So now we all have to change our process because someone’s too lazy to walk down the hall?”
Lazy. That was the word they used.
No, he didn’t, but yes, he did
Never mind that I was always ahead on my emails, stayed late without complaining, ran my advising sessions efficiently, and created multiple resources (a blog, handouts, FAQs) that actually saved students time.
Forget that I often caught graduation issues and missed prerequisites because I wasn’t constantly putting out fires. I wasn’t booked solid every hour because I had built systems that prevented those panics in the first place. And because I looked calm, wasn’t running around in a panic like everyone else, and had time to think, they assumed I wasn’t working hard enough.
I was punished for being effective. Let that sink in for a minute.
And this wasn’t about writing a name on an intake form. The folder was just the disruption. One of many I would face in my career. It was because I was always shining a light on busyness. I refused to participate in the chronic overwhelm and chose to create better workflows instead.
That made me a threat to the status quo. I learned quickly that my coworkers’ survival instincts kicked in fast when that happened, and they weren’t always rational or logical when it did.
Sound at all familiar?
New tools meet the status quo
When I read the study about AI use at work, I experienced some flashbacks. It didn’t matter that people were getting more done or that the quality was high. Even when AI reduces stress, streamlines tasks, and solves real problems, using it somehow creates a negative perception.
When someone finds a faster, more effective way of doing something, they aren’t always met with curiosity or gratitude. Many times, they’re met with side-eyes, passive-aggressive behavior, and resistance. Leadership notices and pulls them aside to express concerns, reinforcing just how deeply the dysfunction runs.
Regardless of your personal feelings about ChatGPT, the tool exposes things that are already broken: the worship of busyness, the hostility toward change, the inability to distinguish performance from value.
It also exposes the erosion of critical thinking, the nostalgia for a time that felt simpler but probably wasn’t, and the growing mental health and loneliness crises we keep trying to ignore or outwork.
AI doesn’t make people lazy. It makes inefficiency visible. And that visibility is threatening, just like it was when I exposed it by asking to change the file folder protocol.
The disorientation is real and revealing
When I watched those ants stumbling around the garage, I realized just how hard they were trying to follow a routine that didn’t work anymore. They relied on instinct, not strategy. And when the conditions changed, they didn’t know how to adapt.
That’s what we’re seeing in a lot of workplaces, K-12 schools, higher education, and everywhere else right now.
AI is disrupting entire industries, from copywriting to project management to how students learn. People are reacting with fear, denial, overconfidence, avoidance, and everything in between. They’re trying to follow old patterns in a new environment. Some are acting like they’ve got it all figured out. Others are desperately hoping it will all go away. Most of us are just stumbling around, trying not to feel obsolete.
I’m not a developer, and I’m most certainly not a tech expert. But I’ve been using ChatGPT for months to research, troubleshoot, and experiment. I use it to help me figure things out, not because I’m trying to take shortcuts or because I’m lazy. I would rather spend time thinking than reinventing wheels.
I used to get side-eyed for all of my ideas to streamline and make things more efficient. Now I get the same treatment for using tools that help me stay that way. And the end result is that I’m still somehow lazy.
I guess some things never change.
Learn to walk a new path or keep wandering
I don’t think AI is going to destroy us, but it is testing us. It’s going to make clear who’s willing to adapt, and who’s still walking the same road even though it doesn’t go anywhere anymore.
Like the ants, some people will keep following the path long after it stops working. Some will blame the trail. Others will blame the ants who found a better way.
And a few of us will keep building, learning, and trying. Probably still getting called lazy for it. Oh well. At least we’re efficient.
Curious about GPTs?
Curious about the custom GPTs I built? DM me to see what they can do for advising and career exploration for students ⤵
© 2025 Bette A. Ludwig: All rights reserved
👉 Don’t Forget to Evaluate Your Leadership Approach with This FREE Assessment
If this post gave you something to think about, please tap the ❤️ and share it with your network. It helps more people see these ideas and keeps me creating. Thanks so much for your support! 🙌


