A framework to stop AI from rotting your brain
New studies reveal its risks, and offer clues on how to mitigate them

There’s a growing pile of evidence that automation is bad for your brain. Letting AI write your emails and summarize reports apparently carries a cognitive cost.
The short-term cost is weaker neural activity, weaker retention, and work plagued by errors you cannot explain or defend. The long-term cost? That’s anyone’s guess.
New research is introducing terms like cognitive debt, epistemic debt, and cognitive surrender into our vocabulary.
Here I’ll summarize what these studies say and what they don’t, and propose a mitigating strategy: more mindful, more selective use of AI.
Step one is understanding the problem: this concept of cognitive debt. To quote the old G.I. Joe slogan, knowing is half the battle. When people, especially young people, know the risk, they might at least make conscious choices.
That’s not to say I’d stop using these tools. In fact, this is a point of agreement among the researchers I’m about to cite.
They aren’t proposing we quit AI, since using it will be non-negotiable in some cases, and some fields. They’re proposing we use it better.
What these studies point to is the need for friction: guardrails that slow us down before we careen recklessly into the canyon of cognitive decline.
I’ll show you my own method below. In the meantime, let’s summarize the latest research on the scope of the problem. And, most importantly, let’s commit to discussing this with people we care about.
They need to know this. They should be aware that a flurry of new reporting backs up a two-year-old Wharton study showing that students who used ChatGPT for math underperformed their peers once the AI was taken away.
Study 1: Introducing the term ‘Cognitive debt’
The term cognitive debt was popularized by a recent MIT study titled Your Brain on ChatGPT. It’s like a credit card: buy now, pay later.
You can order any summary you want, any email draft, any graphics, quickly and easily, like walking into a luxury store and leaving with a designer handbag just by flashing a piece of plastic. The bill comes later, however. Then the collection agency starts knocking at your door.
This study tracked the debt.
Researchers asked dozens of participants to write essays in three separate sessions. Each time, participants were interviewed, graded by AI, and hooked up to EEG machines that monitored their brain activity.
Participants were broken into groups: one that used AI, one that used search engines, and one relying only on their brains. In a fourth session, participants were asked to write an essay without tools.
The result? The better the tool, the lower the brain activity. The AI group fared worst, registering 55% lower dDTF signal flow between brain regions compared with the no-tool group.
That’s not all.
People who used their own brains or search engines could all quote from their own essays. Yet 83% of the LLM group reported difficulty quoting in Session 1, a figure that briefly improved by Session 3 to a still-poor 33%.
Then came the collapse: in Session 4, they failed to quote what they’d written at a rate seven times higher than other participants.
Now imagine this is a boardroom. You’re presenting a career-changing project, and you can’t answer questions on the fly. The collection agency has arrived.
Here’s what we don’t know: whether such damage is temporary or long-lasting. The authors called for further study.
Study 2: ‘Epistemic debt’ in vibe-coding
A machine learning scientist, until recently a researcher at Amazon, monitored 78 people using AI for computer programming, or “vibe-coding.” The findings were released last month.
He tested a theory: that adding guardrails, friction in the process, forces thinking, and thinking forces real learning.
He made some participants answer questions about their project part-way through. Then he took away everyone’s AI, and had them fix problems in the code.
Surprise: people who faced friction earlier, those forced to answer questions, fared far better than those who had sailed through on AI, posting a 39% failure rate in the final task compared to 77%.
Author Sreecharan Sankaranarayanan calls it epistemic debt: you’ve created something you do not truly own. You may own it legally, but not cognitively, meaning you’ll flop when challenged.
His advice? The same as the MIT study’s: add friction earlier in the process.
He also suggests applying a framework: identify the different types of cognitive work in a project, then be careful about which types you outsource to AI.
I’ll share my own general-purpose framework in a second.
Study 3: ‘Cognitive surrender’
Researchers at Wharton tested 1,372 people on nearly 10,000 tasks, in a paper released last month whose title plays off the famous work of decision-making psychologist Daniel Kahneman.
They gave people access to a chatbot. Crucially, they fiddled with the accuracy of different chatbots, to test who was over-relying on them.
Subjects used the chatbot more than half the time, and those who did relied on its answers 80-93% of the time.
Here’s the fun part: participants’ accuracy rose 25% when the AI was right and dropped 15% when it was wrong, which the authors called evidence of cognitive surrender.
This means people stopped thinking, and relied blindly on a bot.
Worse yet, the people who used AI, and who had the most confidence in it, remained confident even as it kept screwing up.
Now here’s a modest silver lining. It connects to the framework below, and to a common lesson in all these studies: adding checkpoints to the process helped. That’s the friction we discussed earlier.
Researchers gave one-third of participants incentives to improve their answers: 20 cents per correct answer and entry into a $20 lottery, along with immediate feedback on whether each answer was right or wrong.
That instant feedback, paired with tiny financial incentives, improved performance a bit: the rate at which people overrode faulty AI outputs rose from 20% to 42%. Accuracy grew.
The takeaway: there is no magic solution, but adding cognitive checkpoints improves accuracy, just as it improved learning in the coding study.

My cognitive checkpoint
I’ve created a matrix that makes me more mindful about my own use of AI; it shapes my thinking about when to use it.
Before doing the ol’ <click> <drag PDF> <summarize>, I ask myself two questions:
Will I have to own this knowledge? How likely is it you’ll need to understand, explain, or defend this someday? Maybe tomorrow, maybe in two years.
Do I have time to do this without AI? In the pre-2022 era, would you actually have done this work? If so, maybe you still have the time.
If the answer to both questions is “Yes,” then just read the damned thing. Or write the damned thing yourself. AI risks hurting you here.
If the answer to both is “No,” that’s a perfect job for ChatGPT: low risk, pure bonus. You weren’t getting to this anyway.
What if the answers are mixed, or unclear? Those are your edge cases: your dilemmas, the yellow zone in the graphic above.
To be clear, I’m not proposing you sit in front of a chart every time you open a chatbot. I don’t.
But I now systematically ask myself these two questions, and this encourages mindful use, one case at a time.
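For readers who like things compact, here is the matrix as a toy Python sketch. This is just my own rendering of the two questions; the “yellow zone” label comes from the graphic, while the green and red labels are my own shorthand:

```python
def should_use_ai(must_own: bool, have_time: bool) -> str:
    """Toy sketch of the two-question matrix for deciding when to use AI."""
    if must_own and have_time:
        # Yes + Yes: AI risks hurting you here.
        return "Red zone: read (or write) the damned thing yourself."
    if not must_own and not have_time:
        # No + No: you weren't getting to this anyway.
        return "Green zone: perfect job for AI. Low risk, pure bonus."
    # Mixed or unclear answers: the dilemmas.
    return "Yellow zone: edge case. Judgment call."

# Example: a ruling I don't need to own and would never have had time to read
print(should_use_ai(must_own=False, have_time=False))
```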
Here are a couple of examples.
When to use AI: Low need to own, low time to read. In my final months in Washington, I would throw every new Supreme Court decision into a chatbot, request a summary, and ask for potential implications, just to see if there might be a surprising story in there. I would never have read them all on my own. There was just no way I had time to read hundreds of pages of court decisions each day, atop my regular duties, especially given that a minuscule fraction of these cases meet the bar for an international news headline. I did not have to own this knowledge; I did not have time to acquire it. AI was simply a bonus here. It ran scouting missions for me, a tip service.
When not to use AI: High need to own, time to read. When I actually wrote about a court decision, I read it. I needed to know this; readers and editors might have questions. And I had time to read it, if it was for an assignment. Take this landmark decision: Trump v. United States, which clarified the rules for when presidents can be charged with crimes. In the initial reaction, there was talk that the black robes had given presidents a green light to commit murder. I read it, and spent days speaking to leading constitutional experts about the nuances. This case matters; it’s still shaping events in Washington to this day. I obviously read it.

Now let’s consider a less dramatic example. This report: the U.S. National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework. Looks boring as hell, right? Yet this might be the most useful document for any organization looking to safely adopt AI. It is foundational. I’m studying this issue and want to understand it for years to come. I read it. It maybe took a strong coffee to get through, but I read it.
Edge cases: Sometimes it’s a coin toss. A judgment call. Take podcasts. I don’t have time to listen to every one I’d like. Some are too long, technical, or peripheral to my interests. I’ll get a transcript, summarize it, and ask the AI questions before deciding whether to listen. Others I truly want to savor, like Trump’s former AI advisor appearing on Ezra Klein’s show. These are two smart people having one of the best conversations I’ve ever heard about AI, and more than that, it covers extraordinarily important topics in the field, from the Anthropic-Pentagon dispute to the gathering conditions that could enable a police state. I’ll write about this soon. I did not want to outsource this conversation. Your own judgment has always mattered in deciding what to read or listen to; it still does.

‘These aren’t moral failures’
One of the studies above quotes an eminent American-Dutch cognitive psychologist and expert on the science of learning, Paul Kirschner.
He’s described the difference between offloading and outsourcing thought. Offloading means a tool helps you think (a word processor, say). Outsourcing means the tool does your thinking.
Outsourcing, he says, has a cost. People who rely on GPS don’t navigate like they used to. Search engines make people remember less, and calculators erode arithmetic skills.
Kirschner insists all these tools have their purpose — even AI: “These aren’t moral failures,” he writes of using LLMs.
But there is a cost, and AI arguably imposes this cognitive cost on a grander scale than any previous technology in human history.
The cost: that our brains stop exercising, to employ a metaphor Kirschner dislikes.
“They become weak and dependent,” he writes. “Minds adapt to the workload they’re given. I hate to use the muscle analogy when talking about the brain, but if you’re sick and bedridden, your muscles atrophy from lack of use.”
All the studies I’ve referred to have something else in common: none of their authors have stopped using AI. In fact, AI is present in the work itself, used in every study I mentioned.
The Your Brain on ChatGPT paper uses an AI agent to grade participants’ essays.
The paper is even written to get LLMs to read it carefully: it carries instructions telling LLMs which section to read first, apparently a clever way to force the relevant context to the top of the context window, since LLMs tend to remember the beginning and end of a long input best, not the middle.
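To make that trick concrete, here is a hypothetical sketch of my own (not the paper’s actual wording or code): lead with the key instructions so they sit at the top of the context, and repeat them at the end, where models also attend well.

```python
# Hypothetical illustration of the context-positioning trick; nothing here
# comes from the paper itself.
def build_prompt(instructions: str, long_document: str) -> str:
    # Instructions go first (top of the context window) and are repeated
    # at the end; material buried in the middle is what tends to get lost.
    return f"{instructions}\n\n{long_document}\n\nReminder: {instructions}"
```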
As for the coding study, it uses an LLM to query participants at that friction point. Same for the cognitive-surrender paper, which even mentions beneficial use cases for AI.
Kirschner himself writes: “AI isn’t going back in the bottle.” The real question, he adds, isn’t whether we’ll outsource: “The question is what we choose to outsource.”
In my own life, I’ve enjoyed this new ability to conduct research on a broader scale and visualize it, and to build apps and personalized tools.
But I do believe we need to be selective about how we use these tools. And, especially, to drill this awareness into young people. They have the most at stake.
What I’ve presented here is a crude instrument, an attempt to force more deliberation about AI. Of course, other factors come into play when deciding what to read.
Factors like: am I enjoying reading this? If so, maybe ditch the borg. The joy of reading matters: some things are just too precious, too human, to outsource.



