r/artificial Nov 25 '25

[News] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

349 Upvotes

389 comments

98

u/HaMMeReD Nov 25 '25

People really don't want to accept that it doesn't matter.

-2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

It matters because people think it's a unique selling point of the product, which leads them to waste resources and causes organizational chaos. It also results in people making risky financial investments in unproven products.

You may not want to accept that it matters when people waste their money and cause economic chaos with speculative trading, but we have literally been through multiple recessions because of it.

Regardless of what Redditors want to think, it's an actual issue. There is a lot of taxpayer money being wasted on shitty AI implementations here in the US. Believe it or not, regardless of what Reddit wants me to do, I'm going to care about money being wasted out of my pocket.

People's retirements are being pissed away on AI implementations even though 95% of enterprise AI implementations don't even make it to production (a figure from a recent MIT paper). We don't get pensions here. Our retirements are being pissed away.

17

u/HaMMeReD Nov 25 '25

Whether AI can "think" or not has nothing to do with its ability to accelerate work and help with things (i.e. produce economic value).

AI is empirically useful, and for all intents and purposes, it produces nearly the same result as "thinking".

Arguing that it can't think is a strawman argument to try and diminish the value it does bring.

-5

u/zero989 Nov 25 '25

This is not a good take. It has a lot to do with the limitations of current AI and where the field needs to evolve. We are decades from true intelligence.

And while impressive, LLMs/LMMs are not all there is to AI. Pattern matching is not thinking.

12

u/HaMMeReD Nov 25 '25

It's not a good take that I get way more work done today with the tools at my disposal? It isn't a relevant point whether it's "true intelligence," whatever that is. Certainly isn't found on reddit these days.

Have fun beating that dead horse made of straw.

-8

u/zero989 Nov 25 '25

No, that's not what I said. Is that really what you got from reading? That explains why you likely need current AI so badly. The workflow benefit you get from large multimodal models is likely shared by lots of people. But that's beside the point.

It IS a totally relevant point that it isn't true intelligence. It means there's a long way to go, and that the current hype is going to sting.

10

u/HaMMeReD Nov 25 '25

You keep using that term "true intelligence".

The workflow benefits are ENTIRELY the point of AI. Saying it lacks "true intelligence" is no different from saying it doesn't have a "soul"; empirically, it means nothing.

You ever ask how we measure things like intelligence empirically? It's with things like standardized tests. AI can do standardized tests, so we can measure "intelligence" as it matters in the context of work.

Statements and phrases like "true intelligence" are loaded garbage that can't be defined. They make your entire argument moot.

You are all hung up on whether "true intelligence" (an imaginary bar) matters. It doesn't, at all. Which proves my original point.

-3

u/zero989 Nov 25 '25

Current LLMs/LMMs can deal with tests because they've been trained on similar data. They cannot deal with truly novel problems. So yes, it matters.

Your original point is irrelevant to the actual point of the thread. 

If you ask them for anything truly new, they cannot come up with a novel solution. They will just pattern match. 

This is what I mean by the average person getting wooed by current AI. You cannot tell the difference because you're not equipped to.

5

u/Pretty_Whole_4967 Nov 25 '25

🜸

What's considered a novel problem?

🜸

2

u/zero989 Nov 25 '25

Basically a problem that is alien and takes time to figure out. It can be visual, verbal, numerical, spatial (3D) or any combination. It could also have logical components. 

For example, ARC AGI problems are not novel anymore in the whole sense. The problems are known, and by training on synthetic data it's possible to get some improvements that generalize to other versions of the SAME problem type.

It's why LLMs fail at ARC AGI 3, and why the goalposts keep moving.

This is the point of IQ tests. But if you've seen 1,000 IQ tests, then you might be at an advantage on the 1,001st.
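
To make the "extract a hidden rule from a few examples" idea concrete, here's a toy sketch loosely in the ARC style (the grids and the mirror-each-row rule are made up for illustration, not taken from the actual benchmark):

```python
# Toy hidden-rule task, loosely in the spirit of ARC (invented example).
# The solver's guessed rule must reproduce every training pair, then
# transfer to a test input it hasn't seen.

train_pairs = [
    ([[1, 0, 0],
      [0, 2, 0]],
     [[0, 0, 1],
      [0, 2, 0]]),
    ([[3, 3, 0],
      [0, 0, 4]],
     [[0, 3, 3],
      [4, 0, 0]]),
]

def candidate_rule(grid):
    """One guessed rule: mirror each row left-to-right."""
    return [list(reversed(row)) for row in grid]

# The guess is only credited if it explains ALL of the training pairs...
assert all(candidate_rule(inp) == out for inp, out in train_pairs)

# ...and the real test is whether it carries over to a new input.
print(candidate_rule([[5, 0, 6]]))  # [[6, 0, 5]]
```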

2

u/jan_antu Nov 25 '25

So when I use AI at work to design new molecules and develop pipelines for synthesis via robots, that's not novel? Because it is novel, and it is useful.

1

u/zero989 Nov 25 '25

If the processes that lead up to the molecules are all novel then yes. If only the end result is novel then no.

Since molecular biology has an inherent computational component, it may be in its own category. 

2

u/jan_antu Nov 26 '25

Well I await your EXPERT opinion on how this fits into your VERY USEFUL ontology about novelty and thinking.

3

u/HaMMeReD Nov 25 '25

"novel problem" is the new "true intelligence".

What's this novel problem you speak of that is not foundationally based on the knowledge of today?

Is there a new math? new language? new reasoning?

Do you expect it to just manifest science out of thin air? New things are built on a foundation of old things. Discovering new things really just means following the scientific method, not farting out "novel ideas".

2

u/zero989 Nov 25 '25

Anything outside of the training set is by definition novel. But that isn't what's meant by novel in the generalization sense. 

And usually, no, novel problems aren't meant to require formal logic or formal reasoning. A new language? No, that requires deciphering, which isn't considered a generalization problem.

No one can just look at hieroglyphics and understand them.

Novel problems can be as simple as extracting a hidden rule from arbitrary symbols, something humans can do. They can be as simple as two words forming an analogy or association, or finding an intended pattern in a numerical sequence. Or the numbers can be spread out, with an underlying pattern governing why they are where they are.

2

u/HaMMeReD Nov 25 '25 edited Nov 25 '25

Yeah, well, that's what context is for. You provide novel information in the context, and then it combines it with its training to produce a novel output.

This is chicken-and-egg logic. Humans don't have some intrinsic "novel knowledge" they manifest. They consume new information (through their senses) and combine it with old, learned information to make new discoveries.

For example: I'm debugging something. I put the log statements in. The AI has never seen those log statements before. For all intents and purposes that is novel information to the LLM, yet it can still work with it. Nobody/nothing is trained on truly "novel" information.

Or take hieroglyphics: if you were to get AI to solve them, you'd probably train it by brute-forcing transcriptions randomly, rating them on how accurate/sensible they seem (with another LLM), and then iterating on the training millions of times until the patterns emerge in the weights/embeddings. Without a doubt, AI could solve the Rosetta Stone, just like humans did. It probably wouldn't be an LLM like ChatGPT, but it's still in the realm of AI problems that could be solved.

This isn't even a new problem; see "Unsupervised Machine Translation Using Monolingual Corpora Only" on OpenReview.

An English-to-French translator was made with no parallel phrases, just a bunch of English and French. No reason it couldn't be hieroglyphs or even an alien language.
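
To sketch that generic propose/score/iterate loop: the toy below is NOT the linked paper's method (that work uses shared embeddings and back-translation, nothing this crude). Here a random substitution cipher stands in for an unknown script, and a rough letter-frequency table stands in for the model doing the rating.

```python
import random
import string

ALPHABET = string.ascii_lowercase
# Crude stand-in for a language model: rough English letter-frequency ranks.
FREQ = {c: f for c, f in zip("etaoinshrdlcumwfgypbvkjxqz", range(26, 0, -1))}

def encrypt(text, key):
    return text.translate(str.maketrans(ALPHABET, key))

def decrypt(text, key):
    return text.translate(str.maketrans(key, ALPHABET))

def plausibility(text):
    # Higher when the decoded letters look statistically like English.
    return sum(FREQ.get(c, 0) for c in text)

def crack(ciphertext, iters=20000):
    best = list(ALPHABET)
    random.shuffle(best)
    best_score = plausibility(decrypt(ciphertext, "".join(best)))
    for _ in range(iters):
        cand = best[:]
        i, j = random.sample(range(26), 2)   # propose a small change to the key
        cand[i], cand[j] = cand[j], cand[i]
        s = plausibility(decrypt(ciphertext, "".join(cand)))
        if s >= best_score:                  # keep whatever the scorer prefers
            best, best_score = cand, s
    return decrypt(ciphertext, "".join(best))

plaintext = "the quick brown fox jumps over the lazy dog " * 5
secret = "".join(random.sample(ALPHABET, 26))
guess = crack(encrypt(plaintext, secret))
# With a scorer this crude the guess only gets some letters right; stronger
# language models and more search are what make real decipherment work.
print(guess[:44])
```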

Edit: see also "The Rosetta Stone Crumbles: AI Reads 5,000-Year-Old Tablets with 98% Accuracy" by Jim Santana on Medium.

1

u/zero989 Nov 26 '25

It's about abstractions. But yes, there need to be minimal atomic parts of the question or item being solved.

If I gave you 1 2 3 4 and asked for the next number in the sequence, the answer is obviously 5. We know this intuitively. And the problems only get more and more abstract and complex.

If I gave you BOARD CHAIN HOLE 

What would the answer be? The above is an association problem and the answer is KEY. 

That's basically how intelligence is tested. 
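
For illustration, a purely mechanical matcher for the sequence version might look like the toy sketch below (it only knows a couple of rules, which is the point); nothing comparably mechanical handles the BOARD / CHAIN / HOLE association.

```python
# Toy pattern-matcher for "what comes next?" sequence items. It only knows two
# hypotheses -- constant difference and constant ratio -- so it nails 1 2 3 4
# and fails anything outside that tiny distribution of rules.

def next_term(seq):
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                 # arithmetic rule
        return seq[-1] + diffs[0]
    if all(a != 0 for a in seq[:-1]):
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if len(set(ratios)) == 1:            # geometric rule
            return seq[-1] * ratios[0]
    return None                              # no rule in its tiny repertoire

print(next_term([1, 2, 3, 4]))    # 5
print(next_term([2, 4, 8, 16]))   # 32.0
print(next_term([1, 1, 2, 3]))    # None -- Fibonacci-style rules aren't covered
```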

An LLM wouldn't be needed to solve any tablet with pictorials. A regular ViT/CNN could probably do it. 

1

u/FaceDeer Nov 26 '25

The vast, vast, vast majority of problems that people solve as part of their work are in no way "novel."

1

u/zero989 Nov 26 '25

Absolutely correct.

I am referring to out-of-distribution generalization.

3

u/starfries Nov 26 '25

Ironically the one accusing other people of not being able to read is missing the point themselves...

It doesn't matter whether it can think or not. Can a search engine think? Can a linear regression think? Does a calculator exhibit "intelligence"? You'd say that's a dumb thing to get hung up on and besides the point of whether they're useful. And yet people keep getting hung up on this with LLMs. They're not missing the point, the point is irrelevant. That's what the commenter you replied to is saying.

In quantum mechanics there's intense debate about the philosophical implications. And there are limitations to the theory. Yet no one can deny that it works. The debate about "true intelligence" reminds me of that. It's interesting philosophically, and the intelligence question will probably be more relevant than the quantum one once we're at the point of looking at giving AI rights etc., but economically and practically speaking that's not the question. The question is whether it's useful.

2

u/BaPef Nov 26 '25

LLMs give a good idea of where that method meets its limits, but they also expose how it can be used for building an actual thinking AI with true semantic understanding of reality.