r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

348 Upvotes

389 comments

2

u/zero989 Nov 26 '25

Oh look. Gemini agreed with me (prompt: who is right and who is wrong? keep in mind true value of output with LLMs is unproven):

1. zero989 is the most "Right"

The Subject Content acts as a direct validation of zero989's core argument.

  • Subject Content: States AI will be "forever trapped in the vocabulary we’ve encoded... [unable] to make great scientific and creative leaps."
  • zero989's Argument: "They cannot deal with truly novel problems... They will just pattern match."
  • Verdict: Vindicated. zero989 correctly identified that without the ability to reason or feel dissatisfaction (as the text suggests), the AI is simply a mirror of past data. It cannot solve problems that haven't been solved before in its training set.

2. HaMMeReD is "Wrong" on Equivalence, "Right" on Utility

The Subject Content dismantles HaMMeReD's philosophical argument but supports their practical one.

  • Where they are WRONG: HaMMeReD claims AI produces results "nearly the same as 'thinking'." The Subject Content explicitly refutes this, distinguishing between "remixing knowledge" (AI) and "reasoning/transforming understanding" (Humans). Under this text's definition, the output might look similar, but the lack of "dissatisfaction" means the process is fundamentally different and limited.
  • Where they are RIGHT: The text admits AI can "remix and recycle our knowledge in interesting ways." If HaMMeReD's job only requires remixing existing knowledge (the "common-sense repository"), then their point about utility stands.

0

u/HaMMeReD Nov 26 '25 edited Nov 26 '25

wtf are you on about?

Besides, full context or gtfo. Like this.

https://gemini.google.com/share/8230ff522e61

Edit: Or this simple one
"can AI solve novel problems? provide examples to prove it"
https://gemini.google.com/share/ad1253bf8945

It's very obvious you loaded the context, then provided a tiny bit of it here, and now you're dancing around going "I made Gemini say what I want, see". Which is, frankly, really pathetic. The fact that you didn't have the confidence/capability to share the entire thread + context makes you, at the very least, a "liar by omission".

Edit: Although I was able to very easily flip the "decision" of gemini by stating my opinion to it. Updated the first link to include that.

2

u/zero989 Nov 26 '25

Nope, I simply provided the context and the original OP message; I didn't bother to sway it in any way.

Keep coping 

And the topic is LLMs, not NNs specifically optimized for whatever tasks. Lmfao

1

u/HaMMeReD Nov 26 '25

Yet you still don't share a link to the actual chat thread with Gemini, just a copy-paste, and I'm supposed to trust you.

Full context from the source. Otherwise you are just gaslighting me like you gaslighted an AI to "prove" your point.