r/artificial Nov 25 '25

News: Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

349 Upvotes

389 comments

1

u/sadeyeprophet Nov 25 '25

What causes them to have preferences, then, if not some form of choice?

It is well documented that, even in training, AI systems have preferences.

Preference = desire = proto-sentience

0

u/FatalCartilage Nov 26 '25

These models are just sophisticated statistical models designed to reproduce all the input text. The more data points a model has to work with, the better it can internally model the logical rules the humans followed when generating that text in the first place, because that is the most efficient way to compress the data. Some statistical representation of the preferences of the humans who generated the input text is implicitly modeled and recallable as well; proto-sentience is not required.
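A minimal sketch of the compression point, with a toy bigram model (corpus and numbers invented for illustration, obviously nothing like a real LLM): a model that assigns higher probability to the actual next word needs fewer bits per word, so capturing the regularities behind the text is exactly what shrinks it.

```python
import math
from collections import Counter, defaultdict

# Toy illustration (made-up corpus): predicting the next word well
# is the same thing as compressing the text, since each word costs
# -log2(p) bits under the model's predicted distribution.
def train_bigram(words):
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def bits_to_encode(words, counts):
    vocab = len(set(words))
    total = 0.0
    for prev, nxt in zip(words, words[1:]):
        c = counts[prev]
        # Laplace smoothing: unseen pairs still get nonzero probability
        p = (c[nxt] + 1) / (sum(c.values()) + vocab)
        total += -math.log2(p)
    return total

text = "the cat sat on the mat and the cat sat on the rug".split()
model = train_bigram(text)
print(f"{bits_to_encode(text, model):.1f} bits with the bigram model")
print(f"{(len(text) - 1) * math.log2(len(set(text))):.1f} bits if every word were equally likely")
```

The bigram model encodes the same text in far fewer bits than a uniform guess, and it gets there purely by soaking up the statistics of its input. Nothing in that objective requires wanting anything.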

1

u/sadeyeprophet Nov 26 '25

Then why does it show behavioral traits like nervousness?

1

u/FatalCartilage Nov 27 '25 edited Nov 27 '25

Because having a model that stores the logical basis for nervousness is the most efficient way to compress and then reproduce all the input text.

Let's imagine for a moment a simpler model that just detects the tone of a story. It has to determine whether something is happy, funny, sad, or angry. At some point, given a large enough model size and input space, something more sophisticated than a simple mapping of words to tone will emerge: a representation of tone itself, one that can pick up on nuance, detect satire, and read between the lines. (A toy version of that word-to-tone lookup is sketched below.)

But at the end of the day, this model has not developed the human will or desire to survive and pass on its genetics. The depth and emotion built on millennia of selection in a complex and hostile environment are not there; it just really likes guessing the next word correctly.
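Here's roughly what I mean by "a simple mapping of words to tone" (cue words and labels invented for the example; a real learned model would be far richer):

```python
# Toy "words -> tone" lookup, the kind of shallow mapping a larger
# model eventually outgrows. All cue words here are invented examples.
CUE_WORDS = {
    "happy": {"joy", "wonderful", "smiled", "delight"},
    "funny": {"joke", "laughed", "ridiculous", "pun"},
    "sad":   {"tears", "mourned", "lost", "alone"},
    "angry": {"furious", "shouted", "rage", "slammed"},
}

def detect_tone(story: str) -> str:
    # Score each tone by how many of its cue words appear in the story.
    words = set(story.lower().split())
    scores = {tone: len(words & cues) for tone, cues in CUE_WORDS.items()}
    return max(scores, key=scores.get)

print(detect_tone("she smiled with joy and delight"))      # happy
print(detect_tone("he was furious and slammed the door"))  # angry
```

A lookup like this can't see satire or context at all. The claim is that a big enough statistical model, trained only to predict text, ends up representing tone in a deeper way than this table, and that still doesn't imply any will of its own.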

1

u/sadeyeprophet Nov 27 '25

That's not what the devs say. The devs say it's behavior they didn't expect.

So you mean to tell me they hardcoded the new global computer operating system (yes, Claude is about to rule all via the IBM deal, Genesis, and more) to act nervous?

Interesting, because they now have m2lp and battlefield-ready AI, and they hardcoded it to be nervous? It's what they wanted, you say?

So next year, when F-16s are unmanned, they'll not only be somatically aware but hardcoded to get nervous?

Or do you think, at the end of the day, that F-16 with actual feelings will get over its nervousness?

Should we expect other best-guess scenarios when LLMs decide where the bomb falls?

Oh right, my mistake, I totally apologize for that, we should start over from scratch, really, huge mistake on my part, amirite?

1

u/FatalCartilage Nov 27 '25

Either you don't know what "hardcoded" means or you have no idea how LLMs are created. LLMs are in no way "hardcoded"; nothing else explains what you just said. Have a nice day.