The future often feels closer at the Massachusetts Institute of Technology (MIT) Media Lab, a hub of innovation showcasing everything from tiny robots to AI-designed surrealist sculptures. Here, research scientist Nataliya Kosmyna delves into complex brain states, developing wearable brain-computer interfaces to aid communication for those with neurodegenerative diseases.
Kosmyna’s work involves extensive analysis of brain activity, including a project designing glasses that detect confusion or loss of focus. Approximately two years ago, she began receiving unsolicited emails from individuals reporting altered brain function and memory problems after using large language models like ChatGPT. She also noticed how quickly generative AI was being adopted around her: colleagues used it at work, and applications from prospective researchers changed, with longer, more formal emails and candidates who paused and glanced away during interviews, raising suspicions that AI was supplying their answers. A key concern emerged: how much did they truly understand those AI-generated answers?
In response, Kosmyna collaborated with MIT colleagues on an experiment. They monitored participants’ brain activity via electroencephalogram (EEG) as the participants wrote essays using no digital aid, an internet search engine, or ChatGPT. The findings were striking: greater reliance on external help correlated with lower brain connectivity. The ChatGPT group, in particular, exhibited significantly reduced activity in brain networks crucial for cognitive processing, attention, and creativity.
This suggested that, however engaged participants felt, their brains were far less active when they used AI to write the essays. An alarming detail from the study was that ChatGPT users were unable to recall what they had just written. “Barely anyone in the ChatGPT group could give a quote,” Kosmyna noted, highlighting a profound disconnect.
Kosmyna, 35, stresses the importance of essay-writing skills—synthesizing information, evaluating perspectives, and constructing arguments—as fundamental to broader life functions. The preliminary study, involving 54 participants, sparked an international media frenzy upon its online release. Kosmyna received over 4,000 emails, many from distressed educators worried that AI is fostering a generation capable of producing passable work without genuine knowledge or comprehension.
The core issue, according to Kosmyna, is an evolutionary predisposition: our brains naturally gravitate towards shortcuts. “Your brain needs friction to learn. It needs to have a challenge,” she explains. Yet, the promise of modern technology is a “frictionless” user experience, ensuring minimal resistance as we navigate digital platforms. This encourages cognitive offloading, where we unthinkingly delegate more tasks and information retention to devices, making us more susceptible to internet “rabbit holes” and increasingly dependent on tools like generative AI.
Collective experience suggests that once we become acclimated to the hyper-efficient digital realm, the “friction-filled” real world feels more daunting. This leads to habits like avoiding phone calls, relying on self-checkouts, ordering everything via apps, and reaching for our phones for basic math or navigation. The trend raises questions about the emergence of a “stupidogenic society,” a term coined by education expert Daisy Christodoulou, in which machines perform cognitive tasks for us.
Concerning trends support this notion. PISA scores for 15-year-olds in OECD countries peaked around 2012, and although global IQ scores rose through the 20th century, many developed nations now show a decline. These declines are debated, but the deepening digital dependence that accompanies each technological advance is not. Kosmyna expresses frustration with AI companies pushing products without a full understanding of their “psychological and cognitive costs,” lamenting that only “software developers and drug dealers call people users.”
In this expanding, frictionless online world, individuals are primarily passive, dependent “users.” The rise of AI-generated misinformation and deepfakes poses a significant threat to our ability to maintain skepticism and intellectual independence. The question looms: how much of our independent thought will remain if we become unable to think clearly without technological assistance?
The Paradox of Progress: AI as a Tool or a Crutch?
Concerns about technology’s impact on cognition are nothing new. Socrates worried that writing would weaken memory and foster a superficial understanding. Yet writing and subsequent innovations like the printing press and the internet ultimately expanded access to information, leading to greater innovation and collective intelligence. Humans excel at “cognitive offloading,” using external tools to reduce mental load and tackle complex tasks, much like using paper for long division or a calendar for scheduling. In the best scenarios, intelligent people partnered with intelligent machines could achieve unprecedented intellectual feats, as seen in AI’s potential to accelerate drug discovery or early cancer detection.
However, if technology is making us inherently cleverer, why does a pervasive feeling of mental dullness persist? The term “brain rot” was Oxford University Press’s 2024 word of the year, capturing the mindlessness induced by endless scrolling through online “rubbish” and aggressively dumb content. Despite having vast knowledge at our fingertips, we often consume low-value information because digital devices are designed to capture and monetize attention, not to enhance clear thinking. Large parts of the internet, akin to “food deserts,” have become “information deserts” offering only “junk brain food.”
In the late 1990s, tech consultant Linda Stone coined “continuous partial attention” to describe the stressful state of toggling between multiple cognitively demanding activities. Multitasking gives many of us an illusory sense of productivity while keeping us constantly alert, producing “screen apnea” (holding the breath while checking email) and significant cognitive costs: increased forgetfulness, poorer decision-making, and reduced attention. This state of cognitive overwhelm helps explain “brain rot,” in which passive consumption dominates because tech companies prioritize engagement over content quality. The pattern is evident in formulaic “casual viewing” streaming content and generic background music, which ultimately prime users to consume without deep engagement or critical thought.
Generative AI: Outsourcing Thought and Eroding Critical Thinking
Generative AI introduces a new dimension: the outsourcing of thinking itself, not just memory or data processing. In our overstimulated lives, many readily embrace this, allowing AI to draft reports, emails, or holiday plans. This means consuming increasingly “predigested” information, bypassing essential human functions like assessing, filtering, summarizing, or critically engaging with problems.
Michael Gerlich, from SBS Swiss Business School, observed a decline in classroom discussions due to generative AI, prompting his study. He found a correlation between frequent AI use and lower critical thinking scores, suggesting that while AI can be a tool for cleverness, its common use often yields bland, unoriginal, and factually questionable work. A significant concern is the “anchoring effect,” where AI’s initial answer can restrict divergent thinking. Gerlich illustrates this: AI might perfect a candle, but it won’t conceive a lightbulb; that requires human, often chaotic, critical thinking. Without proper AI training in workplaces, companies risk fostering “passable candle-makers” in a world demanding “high-efficiency lightbulbs.”
For adults, a foundation of pre-AI education offers some buffer. However, surveys show that 92% of university students use AI, with 20% using it to write assignments. This raises profound questions for the education system: Is it producing creative, original thinkers, or “mindless, gullible, AI essay-writing drones”?
High school teachers Matt Miles and Joe Clement, authors of Screen Schooled, have observed firsthand how technology overuse impairs students. While phones are banned, laptops still pose distraction challenges. Miles noted a student’s insightful remark: “If you see me on my phone, there’s a 0% chance I’m doing something productive. If you see me on my laptop, there’s a 50% chance.”
Researcher Faith Boninger highlights that while many teachers were skeptical of tech in classrooms before the pandemic, lockdowns normalized ed-tech platforms. Yet, most research touting ed-tech benefits is industry-funded; independent studies, like a global OECD report, show that more in-school tech use correlates with worse results. Wayne Holmes, a professor of critical studies of AI and education, criticizes this as “experimenting on children” with untested tools, a standard we wouldn’t accept for medicines.
Miles and Clement worry about students losing critical thinking skills and deep knowledge, constantly seeking quick answers. Clement argues that “being able to Google something and providing the right answer isn’t knowledge.” Without a deep understanding, individuals are poorly equipped to discern truth from misinformation, especially in an internet increasingly polluted by AI-generated fakes. Miles recounts his son’s frustration with an inflexible online math program, which couldn’t accept alternative, yet correct, solutions. This anecdote underscores a critical point: the real “nightmare” of widespread stupidity might not stem from submitting to super-intelligent machines, but rather from entrusting our cognitive processes to “dumb ones” that stifle human ingenuity and flexible thought.