How Close are We to Digital Dementia?
- Joshua Russell
- Dec 28, 2025
- 3 min read

Large Language Models (LLMs) like ChatGPT, Claude, and Gemini have taken the world by storm over the past two years. I imagine almost everyone reading this newsletter has at least experimented with one; many more of you have probably incorporated LLMs broadly into your lives as efficiency tools.
The term "digital dementia" was coined by German psychiatrist Manfred Spitzer in 2012 to describe a phenomenon whereby the brain loses capacities when its owner relies excessively on technology. We’ve all experienced this. Before cell phones were commonplace, we knew the phone numbers of our entire family and all our closest friends by heart. Now it’s rare for me to meet anyone who has more than their spouse’s phone number memorized. Similarly, with the rise of GPS, it is increasingly common to hear stories of otherwise high-functioning individuals who routinely struggle to navigate the city in which they live.
These examples of digital dementia represent losses in limited domains of function, but what happens when we outsource our critical thinking wholesale to LLMs? It has become commonplace to use LLMs not merely for research or brainstorming but to generate text outright. Unlike remembering phone numbers, writing engages all of our critical thinking faculties. In fact, it may be the closest facsimile to thinking itself; as Flannery O’Connor observed, “I don’t know what I think until I read what I write.”
In a study titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” researchers monitored the brain activity of participants (MIT students) while they completed writing tasks, comparing those who worked unassisted with those who used an LLM.
The findings were stark. Over just four months, the researchers found that participants showed:
• Reduced Neural Engagement: Participants using LLMs showed a 37–41% decrease in cortical engagement. Their brains were measurably less involved in the cognitive work of composing text.
• Impaired Memory and Originality: LLM users exhibited a 28-point decline in recall of the text they produced and generated essays that were more uniform and less original. They reported a weak sense of ownership over the final product.
• Lingering "Cognitive Debt": This cognitive under-engagement persisted even after participants stopped using LLMs.
These findings draw a direct and alarming parallel to the concept of "digital dementia." Our brains are like muscles, not hard drives; they strengthen with active use and atrophy when they’re allowed a resistance-free path to task completion. If we consistently offload core cognitive work—like diagnostic reasoning or clinical synthesis—to digital tools, we risk, in short order, the atrophy of our clinical reasoning.
As the automobile became widespread, there was an obvious decline in the average physical fitness of populations who relied on internal combustion engines for transportation. Similarly, the LLM and other AI technology offer convenience, but at a cost. Just as it is incumbent on every individual to monitor and attend to their own physical fitness, we are entering an era where similar ownership of mental fitness will be required for each of us to avoid the loss of our cognitive faculties.
This will require a shift in mindset for many. None of us would expect to maintain our strength if we lay in bed for several weeks straight, yet awareness of the threat of losing thinking functions after even relatively short periods of mental laziness is far less prevalent.
The central warning of the study is clear: the danger isn't that AI will replace clinicians (or other knowledge workers), but that clinicians who over-rely on it will lose the uniquely human skills of judgment, context, and doubt that are our primary value propositions.
Movie Recommendation:
Idiocracy (2006) - for an eerily prescient depiction of the phenomenon, see the hospital scene at the 14:30 mark.