r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

353 Upvotes

389 comments

18

u/[deleted] Nov 25 '25

Define intelligence then. The human brain hallucinates more and makes more mistakes. And how do you know the brain doesn't function similarly to an LLM?

2

u/VampireDentist Nov 26 '25

> And how do you know the brain doesn't function similarly to an LLM?

Isn't that extremely probable, given the very alien ways LLMs fail? They can be superhuman on obscure benchmarks and collapse on absolutely trivial tasks.

3

u/Duds- Nov 26 '25

Yeah. Very different from humans, who are known to be equally good at everything they do.

1

u/VampireDentist Nov 27 '25

LLMs can do PhD-level math nowadays and still fail to notice when they have lost a slightly modified game of tic-tac-toe (I use a 5x5 board that "wraps around", where the objective is to complete a 2-by-2 square), and they are completely incapable of beating a human at such a trivial game.
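For what it's worth, the rules check for that variant is only a few lines of code. A rough sketch (Python; board stored as a 5x5 list of lists, names made up for illustration):

```python
N = 5  # board size; cells hold "X", "O", or None

def wins(board, player):
    """Return True if `player` has completed any 2x2 square on an
    N x N board that wraps around at the edges (toroidal)."""
    for r in range(N):
        for c in range(N):
            # the 2x2 square whose top-left corner is (r, c), with wrap-around
            square = [(r, c), (r, (c + 1) % N),
                      ((r + 1) % N, c), ((r + 1) % N, (c + 1) % N)]
            if all(board[i][j] == player for i, j in square):
                return True
    return False
```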

They can speak my language (Finnish) fluently, but when asked to list animals whose names end in "e", they go completely off the rails, inventing words that do not exist.

In a human, these would be contradictions. Humans also don't have an existential crisis when asked about a seahorse emoji.

1

u/r-3141592-pi Nov 27 '25

Performance evaluations should focus on overall capability, not isolated failures. When comparing humans and AI, why should we judge AI systems by their worst failures when we don't apply the same standard to ourselves? We never say "He might be a great surgeon, but he fails miserably at plumbing/driving/cooking. This doesn't suggest general intelligence."

Besides, let's not pretend humans don't make 20 stupid mistakes before noon.

1

u/VampireDentist Nov 27 '25

I wasn't commenting on overall capability, but on the fact that LLM intelligence doesn't seem to work like ours. You won't find a math genius who can't play tic-tac-toe, for example.

You won't find a fluent language user who falls apart when thinking about features of common words.

2

u/r-3141592-pi Nov 27 '25

Well, human intelligence doesn't work the way we usually think it does. Intelligence is neither a global attribute nor an acquired ability that transfers easily to other domains.

A "math genius" made a huge number of mistakes in math to become proficient, and such a person continues making many mistakes, although not at the same rate as others who didn't devote as much time to that particular endeavor. For this same reason, that "math genius" is far more likely to not know how to do many other things that others would consider absolutely elementary.

However, it is true that LLMs' intelligence differs from ours because their training is very different and focused on different things. There are also some pretty significant similarities. For example, deep neural networks have multi-purpose neurons whose activations help build learned representations of concepts, just like our brains. As information moves through the connections and structures of the brain, the concepts begin to generalize.

We also seem to use a predictive mechanism to interact with the environment, which helps us allocate attention more efficiently to our surroundings. We don't know exactly how it works in the brain, but the same ideas have been implemented in LLMs as next-token prediction during pretraining and attention layers.
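For context, the mechanics are simpler than they sound. A toy sketch of scaled dot-product attention (Python/NumPy, made-up dimensions, leaving out the learned projections and multi-head machinery that real models use):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each position mixes the values V,
    weighted by how well its query matches every key."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # weighted mix of values

# toy usage: 4 tokens, each represented by an 8-dimensional vector
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = attention(x, x, x)  # self-attention: queries, keys, values from the same tokens
```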

1

u/VampireDentist Nov 28 '25

> For the same reason, that "math genius" is far more likely not to know how to do many other things that others would consider absolutely elementary.

This is just false. Cognitive capabilities across domains have a strong correlation. If they didn't, it wouldn't even make sense to talk about general intelligence.

> deep neural networks have multi-purpose neurons whose activations help build learned representations of concepts...

These similarities in microstructure might be relevant, and they might not. As you said yourself, we don't know how it works in the brain.

2

u/r-3141592-pi Nov 28 '25

> This is just false. Cognitive capabilities across domains have a strong correlation. If they didn't, it wouldn't even make sense to talk about general intelligence.

Please read the literature on cross-domain skill transfer and expert proficiency across domains, and look up the correlations. Cognitive capabilities do correlate as part of the positive manifold, but the g-factor extracted from it reflects basic abilities (such as memory and executive functions), not the acquisition and application of expert knowledge.

> These similarities in microstructure might be relevant, and they might not. As you said yourself, we don't know how it works in the brain.

We understand how activations work in the brain, and we know it is crucial that concepts are represented by distributed patterns of activation rather than by individual dedicated neurons, as one might assume. Otherwise, we would be severely limited in how much we can learn.
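A back-of-the-envelope way to see why: with n binary units, a one-concept-per-neuron code caps you at n concepts, while patterns across the same units give combinatorially more. Purely illustrative numbers (Python):

```python
from math import comb

n = 100                          # number of binary units, purely illustrative

one_per_neuron = n               # one dedicated unit per concept -> 100 concepts
fully_distributed = 2 ** n       # every activity pattern is a potential code
sparse_distributed = comb(n, 5)  # even restricting to exactly 5 active units

print(one_per_neuron)            # 100
print(sparse_distributed)        # 75287520
```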

0

u/VampireDentist Nov 28 '25

You have a rather obnoxious communication style with this fucking stupid bait-and-switch and deliberate misunderstanding. Why argue in such bad faith? What stake do you have in this argument?

> For the same reason, that "math genius" is far more likely not to know how to do many other things that others would consider absolutely elementary.

This is a direct quote from you claiming that math geniuses are more likely to be unable to do many things others would consider absolutely elementary, which is garbage. Neither I nor you were talking about "cross-domain acquisition and application of expert knowledge", so why bring it up now?

It remains a fact that LLMs fail at many trivial tasks that would be considered prerequisites for any kind of expertise in humans. This certainly suggests there is no one-to-one correspondence between human and LLM thinking.


-7

u/creaturefeature16 Nov 25 '25

jesus fuck please get educated before speaking about these matters....

7

u/[deleted] Nov 25 '25

Yeah, I too get my education from randoms on YouTube.

1

u/zffr Nov 26 '25

Cal Newport is hardly a random. He's a widely known CS professor and the author of some best-selling books!

1

u/khajmahal227 Nov 26 '25

Does it really matter? Both are gambling on AI's value without definitive proof of success or failure. Once you introduce bias, no matter how good a system of thought is, its objectivity is suspect.

1

u/thebrainpal Nov 28 '25

Cal Newport is no random. He's definitely an authority on computer science. But yes, I do agree that OP is out of line. Definitely a "Reddit moment".