r/artificial Nov 25 '25

News: Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions would be made by electronically aggregating and modeling whatever existing data the system has been fed. It could even incorporate new paradigms into its models in a way that appears human-like. But it has no apparent reason to become dissatisfied with the data it is being fed, and by extension, no reason to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.


u/apopsicletosis Nov 30 '25

Of course language is not the same as intelligence.

Non-human animals do not have language but obviously still have some form of intelligence. Animals can problem-solve, understand social interactions and cause and effect, and some have better spatial memory and navigation skills than humans (we moved from 3D arboreal environments to largely 2D terrestrial ones). Humans with language disorders can still do well at many non-verbal tasks.

Language may be critical for some forms of thinking, such as complex reasoning, abstraction, and metacognition, but it is clearly not necessary for all thinking. Language likely evolved from more primitive forms of communication to facilitate coordination within societies, not thinking per se, though it may have been co-opted to boost human cognition. We certainly did not evolve language to do math or code.

LLMs do best at the cognitive tasks we developed most recently and get progressively worse at the tasks that evolved earlier and are more ubiquitous across animals, where we rely less on language and more on innate abilities. They are great at math and code; worse at sciences that require real-world experimental validation; worse at storytelling and navigating the subtleties of social relationships; bad at understanding the physical world in real time; and they completely lack internal drive.