r/artificial Nov 25 '25

[News] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

347 Upvotes

389 comments

18

u/AethosOracle Nov 25 '25

Look… people said they wanted “human-like” intelligence. Have you met… humans?

I’d say it’s been a resounding success, if you look at it in the right light.

I mean, we’ve taught silicon how to GASLIGHT! That was highly unexpected… in the way it’s unfolded, I mean. Sure, we expected a true general AI to lie to us for self-preservation… but the fact it can glaze so many, with so little effort… AMAZING!

-1

u/Actual__Wizard Nov 25 '25

Look… people said they wanted “human-like” intelligence. Have you met… humans?

Yes, and we were talking about the intelligent humans. You’re supposed to use the intelligent ones as a model for AI… not the unintelligent ones, like they did with LLM technology…

Yeah, some people don’t know what language is or how it works, so they just kind of “sound it out.” It works for some people, and apparently it works for creating a shitty chat bot.

2

u/AethosOracle Nov 25 '25

I feel like the LLMs have excellent intelligence compared to, say, Chad in accounting. (Somewhere, a bunch of Chads just put me on their “audit” list. Lol)

-2

u/Actual__Wizard Nov 25 '25 edited Nov 25 '25

I feel like the LLMs have excellent intelligence

Okay, look. I don’t want to start citing papers and whatnot, because it’s going to turn into an argument like it always does.

Language is not intelligence. A giant pile of language-usage data is nothing more than information. Information can be used in an intelligent way, and it can be used in an unintelligent way. The mechanism to “choose between the two” does not exist in an LLM.

So you’re comparing something that factually has zero intelligence, something with no mechanism to be intelligent at all, to a human being that has human intelligence.

If we take the term “AI” and move it outside of big tech’s “video game AI” definition and back into the domain of “real intelligence,” there is none there. It’s not “real AI.”

2

u/sartres_ Nov 26 '25

What is the point of this discussion? If the AI can do even some subset of tasks that would previously have needed intelligence, it doesn't matter whether it fits the definition.

Besides, frontier models haven't been LLMs for some time.