AI’s Core Flaw: New Research Suggests Language Models Don’t Equate to True Intelligence


Prominent figures in the tech world are making extraordinary claims about the imminent arrival of artificial superintelligence. Mark Zuckerberg envisions the “creation and discovery of new things that aren’t imaginable today,” while Dario Amodei predicts AI that is “smarter than a Nobel Prize winner across most relevant fields” by 2026, even hinting at a “doubling of human lifespans” or “escape velocity” from death. Sam Altman, confident in OpenAI’s ability to build Artificial General Intelligence (AGI), suggests superintelligent AI could “massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.”

But should these bold declarations be taken at face value? A closer examination of the science behind human intelligence, coupled with the actual capabilities of current AI systems, suggests a need for caution.

The fundamental technology powering today’s most recognized chatbots – including OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and Meta’s various AI offerings – is the large language model (LLM). These systems operate by processing immense volumes of text, identifying statistical correlations between words (or ‘tokens’), and then predicting the most probable output for a given prompt. Whatever complexity generative AI has acquired, at their core these systems remain highly advanced models of language.
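To make that mechanism concrete, here is a deliberately toy sketch of next-word prediction built from simple co-occurrence counts. This is not how production LLMs work – they use neural networks with billions of parameters trained on vastly more data – but it illustrates the same underlying idea: learn which tokens tend to follow which, then emit the statistically most likely continuation. The corpus and function names below are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on trillions of tokens, not a few sentences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each token follows each other token (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_token_probabilities(token):
    """Return the probability of each possible next token, given one token of context."""
    counts = following[token]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.most_common()}

print(next_token_probabilities("the"))  # e.g. {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_token_probabilities("sat"))  # {'on': 1.0}
```

The point of the sketch is that nothing in this procedure models the world, intentions, or reasoning – only the statistics of language itself, which is the article’s central concern.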

Herein lies a critical disconnect: according to contemporary neuroscience, human thinking processes are largely independent of human language. This emerging understanding challenges the underlying assumption prevalent in much of the AI industry. If human intelligence is not merely a function of linguistic prowess, then there is little scientific basis to conclude that simply developing ever-more-sophisticated models of language will inevitably lead to a form of intelligence that truly meets or surpasses our own.

This raises profound questions about the very foundation of the current ‘AI bubble,’ which appears to rest on a significant oversight: the assumption that mastering language equates to mastering intelligence itself.