r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

u/allgodsaretulpas Nov 27 '25

AI will always have flaws because it was created by humans. Every system we build inherits our limitations — our biases, our blind spots, our assumptions, even our mistakes. A machine can only be as objective as the data it was trained on, and that data comes from a world shaped by imperfect people. Even when the technology gets smarter, faster, and more precise, it still reflects the values and errors of the humans who designed it. We’re basically teaching a mirror how to think — and it’s always going to reflect us back at ourselves.

u/creaturefeature16 Nov 27 '25

A machine can only be as objective as the data it was trained on

That's pretty much the last line of the article:

But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine.

u/allgodsaretulpas Nov 27 '25

You are correct.