Expired on Arrival
When six months is already ancient AI history

I write about leadership, education, AI, and why teaching critical thinking about it is more important than ever.
Please hit the heart, restack, subscribe, and all that jazz to help spread the word.
Recently I had a short exchange with Avi Hakhamanesh, who also writes about AI here on Substack. She mentioned that she prefers not to use research that is more than six months old.
I replied that I felt the same way and was uncomfortable using anything more than three or four months old.
If you spent years in academia, that's a strange reality, because citing work that is a few years old is completely normal there.
But we all know that AI moves at a completely different pace.
And that exchange highlights a strange paradox in AI writing right now.
When thinking overheats
We've all read the articles about how AI is freeing up time and boosting productivity, and that is starting to have a side effect of its own. A study in HBR just came out calling it brain fry: after heavy AI use, individuals find themselves in a fog and have difficulty focusing.
The constant and rapid shifting between problem-solving and execution mode is heavy cognitive load. Hours upon hours of non-stop context switching like this becomes exhausting.
Anyone deep in the weeds understands how quickly the technology moves. The interfaces are constantly changing. What felt current six months ago can already be completely outdated. Where that gets fascinating fast is how many of the biggest conversations about AI are being built on ideas that can be more than a century old.
Wait, what???
The iron cage
A good example is the article that went viral earlier this year: How AI Destroys Institutions. The argument sounded very current and urgent, but the intellectual backbone of the piece drew on classical sociology, including Max Weber, who died in 1920.
Weber wrote extensively about bureaucracy and the way large institutions become rigid systems built on rules and procedures that end up trapping those working in them. He called it the iron cage.
That work is over a hundred years old. Yet people are using it to explain what is happening with AI today.
So we end up with an odd contradiction: someone will say research from six months ago already feels outdated, while others cite theory from 1905.
Well, there are multiple things going on here. This is one of the tensions education is experiencing, and it helps explain why the reaction has been so swift and strong: AI challenges many deeply ingrained structures within the system.
One that rarely gets talked about is the publication cycle.
Anyone who has worked in higher ed understands the phrase "publish or perish." When you look at that in the context of AI, how quickly it moves, and how everyday people can barely keep up, you start to understand why large bureaucratic institutions react so viscerally.
Books about AI are outdated before you open them
The traditional publishing cycle runs somewhere between eighteen months and two years from finished manuscript to actual publication. In most fields that's fine. But AI is a different animal entirely.
A book written about AI two years ago isn't just missing recent developments. It's most likely operating on assumptions about what AI could and couldn't do that are completely wrong now.
You're not just buying a slightly dated overview; you're buying a document about a world that no longer exists, packaged to look like current thinking.
And it's about to get worse. There are rumors that OpenAI is planning to push out monthly updates, so by the time a book reaches the shelf, the product it describes may have been updated dozens of times. The company could be moving in a completely different direction. That means entire chapters may describe capabilities or limitations that have already been reversed.
Monthly updates versus an 18-month production cycle is a divide as wide as the Grand Canyon. The two don't even live in the same universe.
And books are only part of the problem.
Academic research moves on its own timeline. Studies have to be designed, funded, conducted, written, submitted, reviewed, revised, and eventually published. Peer review alone can add months or even years before a paper finally appears in a journal.
By the time a study on AI is published, the systems it examined may already be ancient AI history.
Are the research and books still valuable? Perhaps, but when the underlying data is already obsolete, the assumptions you build on them are incomplete at best and misleading at worst.
So where does that leave us?
Honestly? We're all improvising.
Sometimes theories like Weber's still technically work. The old thinking feels familiar, maybe even comforting. But comfortable and useful are not the same thing. I would argue further that sometimes the comfortable is exactly what keeps us stuck.
Academics move at the pace of committees and peer review. Publishers run on contracts, edits, and production schedules. Practitioners like Avi and me are working on something closer to real time, and even that feels like showing up late to a party that already moved to a different address.
The short shelf life of current research isn't just a practical inconvenience. It's a flashing neon sign that the thing we're trying to understand is moving at a pace we never could have anticipated.
And sadly, Weber would probably find all of this completely predictable, even a hundred years later.
Paid Subscriber Resource
The AI Source Vetting Guide is a resource available exclusively for paying subscribers. It provides a quick framework for evaluating AI articles, books, studies, and expert claims before you cite, share, or build on them.
Your access link was included in your welcome email. If you can't find it, check your spam folder or send me a direct message here on Substack or LinkedIn and I'll send the link again.
© 2026 Bette A. Ludwig: All rights reserved




