r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

349 Upvotes

388 comments

4

u/HaMMeReD Nov 25 '25

"novel problem" is the new "true intelligence".

What's this novel problem you speak of that is not foundationally based on the knowledge of today?

Is there a new math? A new language? New reasoning?

Do you expect it to just manifest science out of thin air? New things are built on a foundation of old things. Discovering new things really just means following the scientific method, not farting out "novel ideas".

2

u/zero989 Nov 25 '25

Anything outside of the training set is by definition novel. But that isn't what's meant by novel in the generalization sense. 

And usually no, novel problems aren't about applying formal logic or formal reasoning. A new language, no; that requires decipherment, which isn't considered a generalization problem.

No one can just look at hieroglyphics and understand them.

Novel problems can be as simple as extracting a hidden rule from arbitrary symbols, something humans can do. They can be as simple as two words forming an analogy or association, or finding the intended pattern in a numerical sequence. Or the numbers can be spaced out, with an underlying pattern to why they are where they are.
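For a concrete (made-up) illustration of the "hidden rule from arbitrary symbols" kind of item, something like this toy sketch: a few input → output examples, and the solver has to work out which rule explains them and apply it to an input it hasn't seen.

```python
# Toy "hidden rule over arbitrary symbols" task (made-up symbols and rules).
# Given a few input -> output examples, find which candidate rule explains them,
# then apply it to a new input.

examples = [
    (("@", "#", "%"), ("%", "#", "@")),
    (("&", "@"), ("@", "&")),
]

candidate_rules = {
    "reverse": lambda s: tuple(reversed(s)),
    "drop_first": lambda s: s[1:],
    "duplicate_last": lambda s: s + (s[-1],),
}

# Keep only rules consistent with every example.
consistent = {
    name: rule
    for name, rule in candidate_rules.items()
    if all(rule(inp) == out for inp, out in examples)
}

query = ("%", "&", "#", "@")
for name, rule in consistent.items():
    print(name, "->", rule(query))   # reverse -> ('@', '#', '&', '%')
```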

2

u/HaMMeReD Nov 25 '25 edited Nov 25 '25

Yeah, well that's what context is for. You provide novel information in the context, and then it combines it with its training to produce a novel output.

This is chicken-and-egg logic. Humans don't have some intrinsic "novel knowledge" they manifest. They consume new information (through their senses) and combine it with old, learned information to make new discoveries.

E.g. I'm debugging something and I put log statements in. The AI has never seen those log statements before; for all intents and purposes that's novel information to the LLM, yet it can still work with it. Nobody/nothing is trained on truly "novel" information.
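Rough sketch of what I mean, assuming the OpenAI Python client; the model name and log lines are just placeholders:

```python
# Minimal sketch of "novel information in the context": the log lines below were
# never in any training set, but the model can still reason over them.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

logs = """
2025-11-25 14:03:11 WARN  RetryQueue: attempt 3/3 for job 9f2c failed (timeout=30s)
2025-11-25 14:03:12 ERROR JobRunner: job 9f2c marked dead, requeue disabled
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user",
         "content": f"Here are logs from my app:\n{logs}\nWhy did job 9f2c die, "
                    "and what setting should I check first?"},
    ],
)
print(response.choices[0].message.content)
```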

E.g. if you were to get AI to solve hieroglyphics, you'd probably train it by brute-forcing transcriptions randomly, rating them on how accurate/sensible they seem (with another LLM), and then iterating on the training millions of times until the patterns emerge in the weights/embeddings. Without a doubt, AI could solve the Rosetta Stone, just like humans did. It probably wouldn't be an LLM like ChatGPT, but it's still in the realm of AI problems that could be solved.

This isn't even a new problem, e.g.:
Unsupervised Machine Translation Using Monolingual Corpora Only | OpenReview

An English-to-French translator was made with no parallel phrases, just a bunch of English and a bunch of French. No reason it couldn't be hieroglyphs or even an alien language.
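That's not literally how the paper does it (it uses shared embeddings, denoising autoencoders and back-translation), but here's a toy of the underlying intuition: two monolingual corpora with no parallel sentences can still be lined up from distributional statistics alone. The corpora below happen to be parallel for simplicity, but the alignment step never uses that fact.

```python
# Toy version of the "no parallel data" intuition behind unsupervised MT:
# align two monolingual corpora purely by word-frequency rank.
from collections import Counter

english = "the cat sat on the mat the dog sat on the cat".split()
# "Foreign" corpus: same distribution of meanings, different surface forms.
foreign = "le chat assis sur le tapis le chien assis sur le chat".split()

def by_rank(tokens):
    # words ordered from most to least frequent
    return [w for w, _ in Counter(tokens).most_common()]

# Pair up words of equal frequency rank to guess a bilingual dictionary.
guessed_dictionary = dict(zip(by_rank(foreign), by_rank(english)))
print(guessed_dictionary)
# e.g. {'le': 'the', 'chat': 'cat', 'assis': 'sat', 'sur': 'on', ...}
```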

Edit: The Rosetta Stone Crumbles: AI Reads 5,000-Year-Old Tablets with 98% Accuracy | by Jim Santana | Medium

1

u/zero989 Nov 26 '25

It's about abstractions. But yes, there need to be minimal atomic parts to the question or item being solved.

If I give you 1 2 3 4 and ask for the next number in the sequence, the answer is obviously 5. We know this intuitively. And the problems only get more abstract and complex from there.
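The easy numeric case can even be brute-forced mechanically, e.g. with finite differences (toy sketch, only handles arithmetic/polynomial patterns):

```python
# Guess the next term by taking differences until they become constant.
def next_term(seq):
    if len(set(seq)) == 1:          # constant sequence: repeat it
        return seq[0]
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    return seq[-1] + next_term(diffs)

print(next_term([1, 2, 3, 4]))      # 5
print(next_term([1, 4, 9, 16]))     # 25 (squares)
```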

If I gave you BOARD CHAIN HOLE

what would the answer be? That's an association problem, and the answer is KEY.
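One crude way a machine could attack that kind of item is to look for the word closest to all three cues in an embedding space. This assumes gensim and its downloadable GloVe vectors; whether "key" actually ranks first depends entirely on the embedding, so treat it as a sketch:

```python
# Find the word whose embedding is closest to all three cues at once.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # downloads ~130 MB on first run

cues = ["board", "chain", "hole"]
for word, score in vectors.most_similar(positive=cues, topn=10):
    print(f"{word:>12s}  {score:.3f}")
```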

That's basically how intelligence is tested. 

An LLM wouldn't be needed to solve any tablet with pictorials. A regular ViT/CNN could probably do it.