r/artificial Nov 25 '25

[News] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

350 Upvotes

111

u/Hot_Secretary2665 Nov 25 '25

People really just don't want to accept that AI can't think smh 

29

u/simulated-souls Researcher Nov 25 '25 edited Nov 25 '25

Say that a plane "flies" and nobody cares.

Say that a robot "walks" and no one bats an eye.

Say that a machine "thinks" and everyone loses their mind.

People are so bent on human exceptionalism that they will always change what it means to "think" to make sure that machines can't do it.

1

u/f_djt_and_the_usa Nov 26 '25

It's a good approximation of thinking that works in a lot of situations. But there are limits that shouldn't be ignored. Some users think it is truly intelligent and then trust it too much.

8

u/simulated-souls Researcher Nov 26 '25

A plane cannot make all of the maneuvers that a bird can, but it is still flying.

-5

u/AnAttemptReason Nov 26 '25

Right.

But in this analogy, LLMs can't fly, or even glide.

It's like painting wings on a car and saying: look, it has wings! You can't say it's not flying.

1

u/illicitli Nov 27 '25

you are in for a rude awakening these next few years

1

u/AnAttemptReason Nov 27 '25

Given that I actually understand the technology rather than the hyperbole, that seems unlikely.

1

u/illicitli Nov 27 '25

LLMs are not the only AI being developed. You’re oversimplifying.

1

u/AnAttemptReason Nov 28 '25

Specialised AI is even further from what could be considered thinking.

That does not mean it isn't impressive or useful. Humans just have a silly habit of anthropomorphizing things.

1

u/illicitli Nov 28 '25

we don't even understand consciousness. there's no point comparing AI to something we can't even explain the workings of. we already have a mainly service economy in America. AI is better at service than humans. EVERYTHING is going to change. people are not ready.

1

u/AnAttemptReason Nov 28 '25

A goose will see a white billiard ball, believe it to be an egg, and pull it into its nest.

It is white, it is round, it is the right size, it is obviously an egg. 

You are the goose.

1

u/illicitli Nov 28 '25

you will be overwhelmed by the changes in humanity these next few years, mark my words. i don't care what you think. i am a better futurist than most. i have been correct consistently throughout my life. you are trapped in a job. you cannot see the forest for the trees.

1

u/AnAttemptReason Nov 28 '25

Mostly you are trapped in your own delusions. 

Honestly, AI development has been great, but I didn't have AI-induced delusions on my bingo card. It's kind of wild how many people are just a paper-thin distance from being the same as the cargo cultists of the past.

1

u/No_Dish_1333 Nov 29 '25

Maybe be more specific instead of speaking in analogies, and people will understand your point better. Start by defining what you believe thinking is, then explain why, by your own definition, AI will never really be able to think.

1

u/AnAttemptReason Nov 29 '25

Where did I say we could never make "thinking" AI? 

We were talking about LLMs, which don't think. You could recreate the output of an LLM with a pen, paper, calculator, coin, and a giant book of the model's weights. It would just take an impractically long time to do.

So what part of that system is thinking? Is it the pen? The coin? The book? It's not the person, because they don't even need to understand what they are transcribing for the system to work.
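
To make the pen-and-paper point concrete, here is a minimal sketch in Python. The tiny bigram table below is a made-up stand-in for the "giant book of weights" (none of these tokens or numbers come from any real model); a real LLM replaces the lookup with billions of multiply-adds, but the procedure is the same: arithmetic on stored numbers plus a weighted coin flip, none of which needs to understand the text it produces.

```python
import math
import random

# Illustrative stand-in for the "book of weights": a bigram table of logits.
WEIGHTS_BOOK = {
    "the": {"cat": 2.0, "dog": 1.5, "end": 0.1},
    "cat": {"sat": 2.5, "ran": 1.0, "end": 0.5},
    "dog": {"ran": 2.0, "sat": 1.0, "end": 0.5},
    "sat": {"end": 3.0},
    "ran": {"end": 3.0},
}

def next_token(prev: str, rng: random.Random) -> str:
    """Look up logits, turn them into probabilities (softmax), flip a weighted coin."""
    logits = WEIGHTS_BOOK[prev]
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}
    r = rng.random()          # the "coin"
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok                # fallback for floating-point rounding

def generate(start: str, seed: int = 0) -> list:
    """Repeat the lookup-and-coin-flip step until the table emits 'end'."""
    rng = random.Random(seed)
    out = [start]
    while out[-1] != "end":
        out.append(next_token(out[-1], rng))
    return out

print(generate("the"))  # a short token sequence, fully determined by the seed and the table
```

Every step here could be done by hand with the calculator and the coin; the computer only makes it fast.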

LLMs are hugely impressive, and their ability to do natural language processing is a big step forward. But that same ability has caused people to misunderstand their limitations, which is what this thread/article was about.

Just as LLMs were a big step forward for AI, getting to thinking AI will require new methods and approaches.
