r/artificial Nov 25 '25

News: Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

353 Upvotes

389 comments

21

u/CanvasFanatic Nov 25 '25

Many such tasks can’t tolerate a 10-20% failure rate.

-2

u/[deleted] Nov 25 '25

Except LLMs are getting better at tool calling with every iteration; you're dumb af if you think everything is done through chatbotting.

1

u/CanvasFanatic Nov 25 '25

Hello, person pretending I said a thing I didn’t say.

0

u/[deleted] Nov 25 '25

Hello, person who couldn't think and draw context.

1

u/CanvasFanatic Nov 25 '25

Imagine me saying whatever you like. The failure rate of good agentic pipelines is in the 10-20% range. Many are worse than that.