The burgeoning enthusiasm surrounding Artificial Intelligence (AI) among tech industry leaders is undeniable. Yet, beneath the surface of innovation and progress, a critical question emerges: is this fervor inadvertently paving the way for an anti-human future? One perspective suggests that the deeply rooted philosophical underpinnings of this AI fanaticism can be traced back to the European Enlightenment.
Tech Titans and the Future of Humanity
Recent high-profile summits, such as the ‘Winning the AI Race’ event, have offered a platform for prominent figures like Nvidia CEO Jensen Huang and AMD CEO Dr. Lisa Su to champion AI’s transformative potential. While their optimism is expected, a closer listen reveals a subtle, yet profound, shift in perspective regarding humanity’s role in an AI-driven world.
Dr. Lisa Su’s remarks on “revitalizing” educational curricula for future generations, with a stronger emphasis on science, technology, engineering, and mathematics (STEM), have sparked debate. While seemingly benign, these comments resonate differently when juxtaposed with observed trends in higher education, particularly in the UK, where humanities departments face significant cuts. This suggests a potential zero-sum game, where the advancement of STEM comes at the expense of humanistic studies—disciplines traditionally focused on understanding the human condition.
Adding to this narrative, Jensen Huang’s assertion that AI acts as the “great equalizer,” such that “everybody’s an artist, now. Everybody’s an author, now. Everybody’s a programmer, now,” raises fundamental questions about the nature of human creativity and authorship. Can an AI content generator truly be deemed an author? Is an AI image prompter an artist?
Imagination vs. Execution: A Core Debate
This perspective is echoed by figures like Fidji Simo, OpenAI’s new CEO of Applications, who suggests AI “collapses the distance between imagination and execution.” Simo envisions a future where the ability to conceive an idea, regardless of one’s capacity to physically create it, is sufficient for artistic recognition. This implies that the sheer act of imagining is equivalent to the painstaking process of creation, a notion that seemingly devalues the craftsmanship and skill inherent in human endeavor.
Such a viewpoint, which appears to hold disdain for genuine human creativity—the act of truly making something rather than merely conceiving it—seems to align with a reduced interest in the humanities. The humanities, by definition, explore all facets of human life through a human lens. If the focus shifts predominantly to imagination, divorcing it from the arduous process of execution, it naturally follows that disciplines dedicated to the study of human struggle, emotion, and tangible output might be seen as less vital.
The Enlightenment’s Long Shadow on AI Fanaticism
This evolving narrative within the AI community, some argue, is not an accidental byproduct but rather an inevitable outcome of a deeper philosophical trajectory rooted in the Enlightenment. This historical period, characterized by an unprecedented elevation of human reason, laid the groundwork for a belief in continuous progress and the eventual transcendence of human limitations. Philosophers like Auguste Comte and Henri de Saint-Simon explicitly envisioned a scientific utopia where experts and technocrats would guide humanity, essentially replacing traditional religious figures.
Post-Enlightenment thought, whether manifest in political movements or technological aspirations, has often clung to this quasi-religious idea of humanity progressing towards a scientific and rational utopia. The ultimate goal, implicitly or explicitly, has often been a “post-human” state—human reason disembodied and transcended. From this vantage point, the fervent pursuit of AI by today’s vanguard elites, including Huang, Su, and futurists like Ray Kurzweil, can be seen as a continuation of this centuries-old post-humanist, scientific-utopian vision. The critical difference now is the advent of seemingly genuine technological means to achieve this previously theoretical anti-human future.
Beyond Regulation: A Shift in “Lived Understanding”
The irony is profound: this potentially anti-human future is invariably marketed as a benefit to humanity. Yet the current discourse, in which human creativity is reduced to mere imagination and universities divest from the humanities, suggests a different trajectory. Addressing this challenge requires more than superficial AI regulations. The problem runs deeper, touching upon how individuals perceive themselves and their place in the world—a concept the philosopher Charles Taylor calls “lived understanding.”
For many AI proponents, a form of technological determinism prevails: the “post-human” era is inevitable, and humanity must adapt or be left behind. Huang’s blunt assertion, “If you’re not using AI, you’re going to lose your job to somebody who uses AI,” encapsulates this mindset. It suggests a future dictated by technology, where human agency is secondary.
Challenging this Enlightenment-derived faith in the post-human necessitates a fundamental shift in our collective “lived understanding.” It requires asking profound questions: What truly defines being human? When does defending intrinsic human qualities supersede technological convenience or improved quality of life? What genuinely contributes to human well-being and flourishing?
Re-Enchanting the Human Experience
Part of the solution may lie in “re-enchanting the human.” This involves moving beyond a view of ourselves as active subjects who are the sole creators of meaning, and instead opening ourselves to the meaning already inherent in things and activities. Engaging in activities that involve genuine creation—not just imagining or prompting—becomes paramount. For instance, treating an activity like gaming as a craft, and appreciating the human creativity poured into its development, can foster a deeper connection to shared human experience.
Philosophers Hubert Dreyfus and Sean Kelly argue that this “poietic” (craftsman-like) way of being is threatened by our technological age. The underlying attitude of AI fanaticism, which prioritizes technological output over human process, exemplifies this threat. By recognizing the value of actual human creation for its own sake, not merely for its utility, we begin to challenge the prevailing paradigm. The path forward demands a fundamental re-evaluation: moving beyond frameworks that assume an unstoppable march towards a scientific or technological utopia, and instead re-centering the profound and irreplaceable value of human existence.