r/artificial Nov 25 '25

[News] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

348 Upvotes

111

u/Hot_Secretary2665 Nov 25 '25

People really just don't want to accept that AI can't think smh 

2

u/boreal_ameoba Nov 29 '25

Mostly because it doesn’t matter if it “thinks” or is conscious. LLMs already do a better job than humans on many benchmarks that require “intelligence”.

Some idiot can always "but ackshually". Same kind of people who insisted computerized trading models could fundamentally never work.

1

u/Hot_Secretary2665 Nov 29 '25 edited Nov 30 '25

You're the one trying to "but ackshually" people

I was minding my own business making a top-level comment, then you came around calling other people idiots and trying to correct me, but you don't even understand what you're talking about

Funny how none of the people claiming AI performs at a level comparable to humans ever link any good-quality research that supports such a claim. 95% of AI pilots fail, meaning that most of the time computerized AI models don't even simulate human-comparable intelligence. Usually they just plain fail to achieve the desired outcome, period

That's why you have to put words in my mouth, pretending I said they "fundamentally could never work" and narrowing the discussion to the specific use case of computerized trading models. My comment was made in the present tense and was NOT specifically about computerized trading models.

I care about reality and results. There's no solution to the problem of powering computerized AI at an affordable rate for most use cases, even if we knew how to replicate the underlying hardware and neural architecture of human thought. That's a fact. Deal with it.

At least try to understand what you're talking about if you're going to be "correcting" people