Might We Harm Our Intelligence Using LLM AI? Maybe
Notes From the Desk: No. 46 - 2025.06.25
Notes From the Desk are periodic informal posts that summarize recent topics of interest or other brief notable commentary.
Might We Be Harming Our Own Intelligence Using LLM AI?
Recently, there have been several papers challenging the idea that LLMs are intelligent or could become intelligent. Now we have evidence that these unintelligent machines may be detrimental to our own intelligence. Is our cognitive ability harmed simply by using AI? The evidence suggests it may be.
Your Brain on ChatGPT: Accumulation of Cognitive Debt
A new study, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”, shows us some of the losses we might experience when we use AI to assist with our tasks. Participants in this study were given the task of writing an essay on topics taken from SAT tests. The participants could choose their topic from a set of 9 topics.
The participants were not given instructions on how to use the LLM; they could use it in any capacity to assist with the task. However, what seems fascinating is that some participants stated they used the LLM only to assist and not to write the essay.
Within the LLM Group, six participants valued the tool primarily as a linguistic aid; for example, P1 “love[d] that ChatGPT could give good sentences for transitions,” while P17 noted that “ChatGPT helped with grammar checking, but everything else came from the brain”.
But the performance gap was significant, well beyond what would be expected if the tool were only used to check grammar.
None of the participants in the LLM group (0/18) produced a correct quote
The next question was about recognition of the prompts. … Unsurprisingly, all but one participant recognized the last prompt they wrote about, from Session 3, however, only 3 participants from the original LLM group recognized all three prompts (3/9).
Our Brains Disengage Using AI
It appears that using AI, even only as an assistant, disengages our brains to a greater degree than would be expected. Or were the participants either not honest about how much they used AI in their writing, or simply not consciously aware that they leaned on it more than they thought?
Some participants stated they felt guilty about using AI, that it was like cheating or unethical. Possibly this skewed their own perception of how much they used AI, or they felt uncomfortable acknowledging how much they used AI. I didn’t find where the paper examined the possibility that there might be a mismatch between how users stated they used AI and how they actually used it.
In conclusion, the directed connectivity analysis reveals a clear pattern: writing without assistance increased brain network interactions across multiple frequency bands, engaging higher cognitive load, stronger executive control, and deeper creative processing. Writing with AI assistance, in contrast, reduces overall neural connectivity, and shifts the dynamics of information flow. In practical terms, an LLM might free up mental resources and make the task feel easier, yet the brain of the user of the LLM might not go as deeply into the rich associative processes that unassisted creative writing entails.
AI Makes Us All The Same, We Lose Our Diversity
Another finding in the paper is the conformity effect of using AI. Contrary to advocates’ claims of a machine that enhances creativity, it is a machine that can be destructive to our diversity of thought.
We found that the Brain-only group exhibited strong variability in how participants approached essay writing across most topics. In contrast, the LLM group produced statistically homogeneous essays within each topic, showing significantly less deviation compared to the other groups
Are These Harmful Effects Avoidable? Maybe
A critically important result in the study came when the participants swapped conditions: the LLM group was required to write using only their brains, while the brain-only group used AI. The original brain-only group did not exhibit the negative cognitive effects seen in the original LLM group.
Brain-to-LLM group, exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic. This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control. By contrast, repeated LLM usage across Sessions 1, 2, 3 for the original LLM group reflected reduced connectivity over time.
This suggests that, if we wish to preserve our full mental capacity, we need to engage our brains on our own for a substantive amount of time before turning to AI for assistance with cognitive tasks.
AI Tools in Society: Impacts on Cognitive Offloading
An earlier study, “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”, further supports concerns that AI use diminishes our cognitive abilities.
The correlation analysis and multiple regression results indicate a significant negative relationship between AI tool usage and critical thinking skills. Participants who reported higher usage of AI tools consistently showed lower scores on critical thinking assessments.
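To make the quoted finding concrete, here is a minimal sketch of the kind of analysis the paper describes: a Pearson correlation between self-reported AI-tool usage and a critical-thinking score. The numbers below are entirely synthetic, invented for illustration only, and are not the study’s data.

```python
# Illustrative only: synthetic data, not taken from the study.
# A negative Pearson r means higher AI usage goes with lower scores.
from statistics import mean, stdev

ai_usage = [2, 3, 4, 5, 6, 7, 8, 9]        # hypothetical 1-10 usage ratings
ct_score = [88, 85, 80, 74, 70, 65, 58, 52] # hypothetical 0-100 test scores

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

print(f"r = {pearson_r(ai_usage, ct_score):.3f}")  # strongly negative here
```

A single correlation like this cannot establish causation, which is why the paper pairs it with regression and qualitative interviews; this sketch only shows what a “significant negative relationship” means numerically.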
AI Dependence
Participants frequently reported a high reliance on AI tools for routine and cognitive tasks. For instance, one participant noted, “I use AI for everything, from scheduling to finding information. It’s become a part of how I think.” This theme aligns with the quantitative findings on cognitive offloading, highlighting how AI tools serve as cognitive substitutes rather than supplements.
It should be somewhat apparent that when you stop doing something, you become less proficient at it. A technology that makes you more productive at a task is substantially different from one that replaces the task entirely. In many scenarios, AI is replacing the task of thinking altogether.
Outsource Your Skill And Lose Your Ability
Whatever you use AI to do, you will become worse at. If you value a skill, you will lose it if you surrender it to the machines. The worst skill you could possibly outsource is thinking. If the youth use AI too early or too much, they may never develop the skills necessary to reason deeply about the world. What are the long-term consequences we still have not considered?
Will we have the discipline to use AI for the narrow purposes that are constructive without collateral damage to our own minds? With most things in life, “easy mode” only creates garbage. Therefore, it is somewhat self-limiting, as it is obviously recognizable as low quality and becomes undesirable. However, with AI, “easy mode” is still garbage, but it looks immensely impressive upon cursory inspection. It can deceptively pass as high quality, incentivizing mass adoption of “the easy mode,” where individuals rely on AI more and more to create the substantive portion of content.
Just as AI for art has created many humans pretending to be artists, AI for research may be simply creating humans that pretend to think. In the end, if this trend continues, we will become helpless, unable to think or create for ourselves. And since the machine cannot create anything new on its own, we become trapped in a society that is frozen in time, consuming nothing more than new permutations of everything that already existed.
Use Your Brain First And AI Last
If there is a proper use of AI to assist with cognitive tasks, it must be the final step in the process. As the first study demonstrated, early use of AI can short-circuit your reasoning process, and continued failure to exercise your mind can only weaken it.
Interesting Note:
I attempted to use all of the SOTA models to search for information in the first paper above, which is over 200 pages in length. I’ve never attempted to do so with such a large paper, but despite all of the models having adequate context size, none could find or cite any information in the paper accurately. The results were nearly 100% hallucinations.
The Battle Between AI Theft And Copyright
You have heard the dystopian future often promoted as “you will own nothing and be happy”. Of course, someone will own all the things; it just won’t be you. The following are two recent articles covering the issue and the bleak future we face when we lose control of everything we create, and our contributing value is decided not by people, but by algorithms. Billions of dollars are being spent to pressure institutions and governments into surrendering it all to AI.
Philosophy for Rebels:
A few poster images I recently created: Feel free to share on social media.
Mind Prison is an oasis for human thought, attempting to survive amidst the dead internet. I typically spend hours to days on articles, including creating the illustrations for each. I hope that if you find them valuable and still appreciate creations from human beings, you will consider subscribing. Thank you!
No compass through the dark exists without hope of reaching the other side and the belief that it matters …