We’re surrounded by AI. From summarizing documents to drafting emails, it’s everywhere. But is this convenience costing us something more profound? Are we sacrificing our cognitive abilities for the sake of efficiency?
For years, I’ve been fascinated by technology, eagerly embracing each new advancement. But recently, I’ve started to question the uncritical adoption of AI, especially its potential impact on our thinking processes.
The relentless pursuit of efficiency, driven by capitalist interests, has led to a concerning trend: the “enshittification” of our digital world. Search engines are becoming less reliable, and online experiences are increasingly frustrating. But the rise of AI, or rather, sophisticated language models mimicking intelligence, presents a new level of concern.
Consider an anecdote from a tech administrator at New York University. When the administrator asked why students relied so heavily on AI, one responded, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?”
At first glance, it’s a reasonable argument. But what are we losing by taking the easy route? What if the journey itself is more valuable than simply arriving at the destination?
The Perils of Cognitive Shortcuts
Imagine needing to buy groceries. A three-minute drive saves you time, but a ten-minute walk offers benefits beyond mere efficiency. You reduce emissions, get some exercise, and connect with your neighborhood. These subtle interactions enrich our lives in ways we often overlook.
Similarly, in professional fields, journalists are increasingly using AI to assist with research, editing, and data analysis. While some draw the line at AI-generated writing, others embrace it at nearly every stage of the creative process. But is farming out these cognitive tasks ethically sound? Are we losing crucial skills and insights by skipping over essential steps?
I’ve always considered myself a compiler of ideas, connecting disparate concepts to create something new. For example, my investigation into the relationship between long Covid and psychedelics stemmed from linking serotonin abnormalities in the gut with the effects of psychedelics. This process involved weeks of research, interviews, and contemplation – connections that AI couldn’t have made.
That’s because large language models (LLMs) like ChatGPT don’t truly “understand” information. They generate responses based on statistical patterns derived from massive datasets. They imitate intelligence, but they lack genuine comprehension and creative insight.
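To make that claim concrete, here is a deliberately tiny sketch (my own illustration, not how any real LLM is implemented): a toy bigram model that picks each next word purely by sampling from a probability table. Real LLMs are vastly larger and use neural networks, but the core move is the same, which is why fluent output need not imply understanding.

```python
import random

# Toy "training data": for each word, the possible next words and their
# probabilities. A real LLM learns billions of such statistics implicitly.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(start, max_words=5, seed=0):
    """Build a sentence one word at a time by sampling from the table.
    There is no meaning anywhere in this loop -- only frequencies."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words:
        options = BIGRAMS.get(words[-1])
        if not options:  # no statistics for this word: stop
            break
        choices, weights = zip(*options)
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output is grammatical, yet the program has no idea what a cat or a dog is. That gap between fluent prediction and comprehension is exactly what the essay is pointing at.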
The Cognitive Offloading Debate
The concept of cognitive offloading, using external tools to lighten our mental load, has been around for centuries. From jotting notes to saving contacts on our phones, we’ve always sought ways to delegate memory and effort. However, this practice has its critics.
Even Socrates questioned the written word, arguing that it could weaken our memory and hinder true wisdom. While calculators, GPS, and word processors have undoubtedly transformed our lives, they’ve also brought trade-offs. Studies suggest that handwriting engages more brain activity than typing, and that memory retention is better with pen and paper.
The question isn’t whether AI is useful but what we lose in the process. What does it feel like to truly understand something, to be creative, to think deeply? These are experiences we increasingly skip.
Satya Nadella, CEO of Microsoft, uses AI to summarize emails and manage his schedule, even listening to AI-generated summaries of podcasts. But is there a point at which these shortcuts diminish our ability to engage with information in a meaningful way?
Sure, you can use AI to skim and summarize information, but what are you missing? If something was worth writing, isn’t it worth reading?
The Dystopian Feedback Loop
Ted Chiang, in a New Yorker article, brilliantly captured the paradox of AI. Language and writing are fundamentally acts of communication that assume thoughtful consideration on the part of the recipient. But AI systems threaten to erode our capacity to think, consider, and write, creating a dystopian feedback loop of superficiality.
As Chiang asks, “We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement?”
So, before blindly embracing AI, let’s pause and consider the cognitive costs. Let’s rediscover the value of the journey, the importance of deep thinking, and the richness of genuine understanding.