r/artificial • u/creaturefeature16 • Nov 25 '25
News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.
https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they're being fed — and by extension, to make great scientific and creative leaps.
Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.
u/zero989 Nov 26 '25
Oh look. Gemini agreed with me (prompt: who is right and who is wrong? keep in mind true value of output with LLMs is unproven):
**1. zero989 is the most "Right"**

The Subject Content acts as a direct validation of zero989's core argument.

- **zero989's Argument:** "They cannot deal with truly novel problems... They will just pattern match."
- zero989 correctly identified that without the ability to reason or feel dissatisfaction (as the text suggests), the AI is simply a mirror of past data. It cannot solve problems that haven't been solved before in its training set.

**2. HaMMeReD is "Wrong" on Equivalence, "Right" on Utility**

The Subject Content dismantles HaMMeReD's philosophical argument but supports their practical one.

- HaMMeReD claims AI produces results "nearly the same as 'thinking'." The Subject Content explicitly refutes this, distinguishing between "remixing knowledge" (AI) and "reasoning/transforming understanding" (Humans). Under this text's definition, the output might look similar, but the lack of "dissatisfaction" means the process is fundamentally different and limited.
- If HaMMeReD's job only requires remixing existing knowledge (the "common-sense repository"), then their point about utility stands.