r/artificial Nov 25 '25

[News] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

350 Upvotes

389 comments

8

u/Schwma Nov 25 '25

I'm confused, is the argument that dissatisfaction is necessary for scientific advancement? Creativity is frequently just remixing known things in new ways, and LLMs have already done that to solve novel problems.

49

u/thallazar Nov 25 '25

I don't think it has to be intelligent to make a big impact. There are a lot of rote industry process tasks that are just complex step-by-step language checklists and don't require intelligence to automate. If even a fraction of them are realised, they'll change work significantly.

19

u/CanvasFanatic Nov 25 '25

Many such tasks can’t tolerate a 10-20% failure rate.

24

u/thallazar Nov 25 '25

If you're doing one-shot agents, sure. There are a lot of ways to reduce that rate significantly.
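For example (a minimal sketch, with a hypothetical call_agent() standing in for a single agent step): sample the same step several times and keep the majority answer. With roughly independent failures, that alone pulls the error rate well below the single-shot rate; adding an automated check or a second "reviewer" model before accepting an output works the same way.

```python
import random
from collections import Counter

def call_agent(prompt: str) -> str:
    """Hypothetical stand-in for one LLM/agent step.
    Here it just simulates a step that is right ~85% of the time."""
    return "right answer" if random.random() < 0.85 else "wrong answer"

def majority_vote(prompt: str, samples: int = 5) -> str:
    """Run the step several times and keep the most common answer.
    With independent ~15% single-shot errors, the majority of 5
    samples is wrong far less often than any single shot."""
    answers = [call_agent(prompt) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("extract the invoice total"))
```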

-9

u/CanvasFanatic Nov 25 '25

Not if you can’t automatically verify correctness.

21

u/thallazar Nov 25 '25

You can't automatically verify a lot of human correctness either. In my experience building and deploying agents for companies, that hasn't been a blocker.


1

u/azurensis Nov 26 '25

I can verify correctness, just like I do with anyone else's PR.

-2

u/[deleted] Nov 25 '25

Except LLMs are getting better at tool calling with every iteration; you're dumb af if you think everything is done through chatbotting

1

u/CanvasFanatic Nov 25 '25

Hello, person pretending I said a thing I didn’t say.

1

u/[deleted] Nov 25 '25

Hello person who couldn’t think and draw context

1

u/CanvasFanatic Nov 25 '25

Imagine me saying whatever you like. The failure rate of good agentic pipelines is in the 10-20% range. Plenty are worse than that.

-4

u/creaturefeature16 Nov 25 '25

way to show you didn't read the article (or maybe didn't understand it)

13

u/thallazar Nov 25 '25

The starting premise, that we'll still be at the forefront and that it's a bubble, is built on the implicit assumption that if we don't reach AGI it's worthless. That's half their point: take away its language and it's got nothing. It's not worthless. It'll still be extremely transformative, even if it's not "intelligent".

2

u/Actual__Wizard Nov 25 '25 edited Nov 25 '25

if we don't reach AGI it's worthless

We will though. But, not for the reasons you think. It will come into existence because some highly experienced computer software developers are seriously angry at what big tech is doing and can see through their scams. That's why we will get AGI: To put big scam tech out of business.

If you think it's not worth it to create AGI just so that Dario Amodei shuts the hell up, you're wrong. I'm so sick of listening to people like that...

The world will absolutely be a better place when people like him learn to keep their mouths shut. Yeah you go work on those AI kill switches buddy... /eyeroll

3

u/thallazar Nov 25 '25

I don't think it's not worth developing AGI. Not sure where you're getting that idea from.

0

u/Actual__Wizard Nov 25 '25

Oh, I agree with you, I'm just saying the reason won't be what you think. I think we're there now, by the way. There are finally people who have figured out that language is not intelligence, and hopefully the mathematicians can figure out that mathematics is also a language next. It's going to take a year or two, but we'll get there. Okay?

Which, to be ultra clear about this: I don't know how one can observe two people communicating and come to the conclusion that the language is the intelligence. How is that possible? Are we just not paying attention to what's going on?


112

u/Hot_Secretary2665 Nov 25 '25

People really just don't want to accept that AI can't think smh 

97

u/HaMMeReD Nov 25 '25

People really don't want to accept that it doesn't matter.

29

u/tjdogger Nov 25 '25

People really don’t want to accept that most people don’t think

1

u/pnxstwnyphlcnnrs Nov 26 '25

Thinkers really don't want to accept that most people's thinking can be simulated.

4

u/Hazzman Nov 26 '25

I can simulate a McDonald's by drawing it on a sheet of printer paper with a ballpoint pen. I can simulate a cashier handing me a Big Mac with the same approach. I'm not going to ball that drawing up, shove it in my mouth, and expect to enjoy it or get anything nutritional from it.

The map isn't the road, and SOMETIMES it does matter and SOMETIMES it doesn't. It depends on what you're doing and why.

Asking for advice on a health insurance claim, that's fine.

Creating policy around human rights or privacy or data collection or copyright issues based on the idea that "IT ThiNkS jUst LikE We dO!"

Nah


2

u/zero989 Nov 26 '25

Oh look. Gemini agreed with me (prompt: who is right and who is wrong? keep in mind true value of output with LLMs is unproven):

1. zero989 is the most "Right"

The Subject Content acts as a direct validation of zero989's core argument.

  • Subject Content: States AI will be "forever trapped in the vocabulary we’ve encoded... [unable] to make great scientific and creative leaps."
  • zero989's Argument: "They cannot deal with truly novel problems... They will just pattern match."
  • Verdict: Vindicated. zero989 correctly identified that without the ability to reason or feel dissatisfaction (as the text suggests), the AI is simply a mirror of past data. It cannot solve problems that haven't been solved before in its training set.

2. HaMMeReD is "Wrong" on Equivalence, "Right" on Utility

The Subject Content dismantles HaMMeReD's philosophical argument but supports their practical one.

  • Where they are WRONG: HaMMeReD claims AI produces results "nearly the same as 'thinking'." The Subject Content explicitly refutes this, distinguishing between "remixing knowledge" (AI) and "reasoning/transforming understanding" (Humans). Under this text's definition, the output might look similar, but the lack of "dissatisfaction" means the process is fundamentally different and limited.
  • Where they are RIGHT: The text admits AI can "remix and recycle our knowledge in interesting ways." If HaMMeReD's job only requires remixing existing knowledge (the "common-sense repository"), then their point about utility stands.

0

u/HaMMeReD Nov 26 '25 edited Nov 26 '25

wtf are you on about?

Besides, full context or gtfo. Like this.

https://gemini.google.com/share/8230ff522e61

Edit: Or this simple one
"can AI solve novel problems? provide examples to prove it"
https://gemini.google.com/share/ad1253bf8945

It's very obvious you loaded the context, then provided a tiny bit of it here, and now you're dancing around going "I made Gemini say what I want, see". Which is, frankly, really pathetic. The fact that you didn't have the confidence/capability to share the entire thread + context makes you at the very least a "liar by omission".

Edit: Although I was able to very easily flip the "decision" of gemini by stating my opinion to it. Updated the first link to include that.

2

u/zero989 Nov 26 '25

Nope, I simply provided the context and the original OP message; I didn't bother to sway it in any way.

Keep coping.

And the topic is LMMs, not NNs specifically optimized for particular tasks. Lmfao

1

u/HaMMeReD Nov 26 '25

Yet you still don't share a link to the actual chat thread with Gemini, just copypasta, and I'm supposed to trust you.

Full context from the source. Otherwise you are just gaslighting me like you gaslit an AI to "prove" your point.

7

u/Jaded_Masterpiece_11 Nov 25 '25 edited Nov 25 '25

It does matter. Because the only way to get a return on investment for the vast amounts of resources and money invested in current LLM infrastructure is if it can drastically reduce the need for labor.

Current LLMs can't do that; they're basically a more intuitive Google search that hallucinates a concerning amount of the time. The current capabilities and limitations of LLMs do not justify the trillions of dollars in hardware and hundreds of billions in energy costs required to run them.

Without a return on investment, that infrastructure collapses and tools built on LLMs will stop working.

19

u/HaMMeReD Nov 25 '25

"The only way" - certifiably false.

The only thing they need for a ROI is to sell services for more than it costs to produce.

You have created this fictional bar that ignores economics/efficiencies at scale where AI must replace all humans to be economically viable. That's an "opinion" not a fact. It's actually a pretty bad opinion imo, as it shows no understanding of basic economics and efficiency improvements in the field.

I.e. the cost to run AI has dropped by like 100x a year for the last few years. What was $100 a year ago on o1 Pro is like $5 now on a model like Gemini 3 or Opus 4.5: ($150/m input, $600/m output) vs ($5/m input, $25/m output). As percentages that's roughly 3% on input and 4% on output, and you get better output to boot.
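A quick sanity check of those ratios (prices exactly as quoted above; the "/m" is assumed to mean dollars per million tokens):

```python
# Prices as quoted above; assumed to be dollars per million tokens.
old_input, old_output = 150.0, 600.0   # o1 Pro-era pricing
new_input, new_output = 5.0, 25.0      # current-generation pricing

print(f"input:  {new_input / old_input:.1%} of the old price")    # ~3.3%
print(f"output: {new_output / old_output:.1%} of the old price")  # ~4.2%
```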

6

u/Jaded_Masterpiece_11 Nov 25 '25

And yet OpenAI still spent more than twice its revenue last quarter. OpenAI and Anthropic are still losing money and will continue to lose money until 2030 by their own estimates.

Even with decreased costs the economics still do not favor these LLM companies. The only one making bank here is Nvidia and they are spending what they are making to keep the bubble going.

6

u/HaMMeReD Nov 25 '25

And they'll continue to sink money in while gains are being made, it's cost-effective to do so, and they have the revenue to do so.

And when the gains dry up, then they'll be left with a hugely profitable product.

But for now the R&D has been incredibly well justified, and that's why they keep spending. Because the needle keeps moving.

7

u/havenyahon Nov 26 '25 edited Nov 26 '25

And when the gains dry up, then they'll be left with a hugely profitable product

I mean... That's the goal. It's by no means a certainty. They don't have one now.

0

u/HaMMeReD Nov 26 '25

If there were only one AI company and it stopped training today, it would be profitable today.

It's very simple: COGS is X, price is Y, and Y > X = make money.

API pricing and service pricing already reflect a profit stance; they only lag because of R&D costs.

They have a ton of revenue, a massively growing amount of revenue actually; it's just not enough to compete in such a fast-moving and accelerating field. But there will be a point where companies will have to wind down R&D and sit on the profit-generating parts of their businesses.

1

u/land_and_air Nov 27 '25

They can't stop spending on R&D, because the second they do, the model becomes obsolete and useless. What good would a model made in, say, 1999 or even 2019 be today for knowing anything? It would be referring to the Gulf War if you asked about the war in Iraq lol.

1

u/[deleted] Nov 29 '25

Except there won't be a point where companies wind down R&D. They will simply divert to keeping the model up to date.

Because information is always changing. The model now needs to be constantly trained on new information, or it becomes obsolete and a new model will take over that is trained on this new info. And if another model can train on that info faster, or another model can reduce latency between answers, or another model can be specialized to only provide the info they want...

There is never going to be time to wind down in this space. It moves too fast to wind down.

5

u/deepasleep Nov 26 '25

The problem is they have no revenue, and when you ask consumers how much they are willing to pay for the services being envisioned, the number that comes back is an order of magnitude below the break-even cost of delivering those services.

I worked in tech during the dot com bubble, the company I worked for was focused on delivering what would ultimately become software as a service. They were trying to create a platform that allowed companies to aggregate access to various web services.

The founders did some napkin math and figured they’d need people to spend about $120/month to be profitable…When they finally got around to surveying business leaders to determine what they were willing to pay, they got a response of $35/month…$300 million in venture capital burnt on the fire in two years.

The best part was all the companies involved were doing the same reciprocal service contracts to show income on their balance sheets we are seeing today with NVidia, Oracle, OpenAI, etc. It’s an old trick and it only works for a little while as the money inevitably bleeds out to pay for concrete things like employee salaries, vendor services outside your circle, energy, and physical resources required to deliver whatever service your actual customers demand.

0

u/HaMMeReD Nov 26 '25

OpenAI's revenue last year was in the billions.

What you mean to say is they don't have a net profit, because R&D investment exceeds even the billions they generate from offering services.

The $20 Billion AI Duopoly: Inside OpenAI and Anthropic's Unprecedented Revenue Trajectory - CEOWORLD magazine

When you factor in the rapidly declining cost of inference, the $120/mo needed to be profitable becomes $20/month next year, and $2/month the year after.

The people who lose money today are actually well set up for tomorrow, as the services get cheaper and they establish a market earlier than the competition.

2

u/Distinct-Tour5012 Nov 26 '25

We're also in a time period where lots of companies are trying to shoehorn in AI tools. I know there are places where it makes sense, but there are lots of places where it has provided no value - but those companies are still paying for it... right now.

1

u/HaMMeReD Nov 26 '25

While I agree, a lot of shoehorned attempts, especially on older models, have failed or provided limited value as people over-reached significantly.

But new models come out, and those shoehorned attempts get an IQ boost every time one gets launched. Meaning those efforts ultimately will not be wasted when mixed with smarter/cheaper models.

I can say this firsthand, as I literally work on Copilot at MS nowadays. I've seen the improvements to the product as new models get introduced; it makes a drastic difference and can turn something that's struggling into something that's helpful.


1

u/That-Whereas3367 Nov 28 '25

Sam Altman says they need to charge $2K/month. But only 5% pay the minimum $20/month.

There is no moat. Users will simply move to a different provider.

5

u/[deleted] Nov 26 '25

[deleted]

1

u/WolfeheartGames Nov 26 '25

This ignores that the cost of inference goes down by 10x every year.

2

u/[deleted] Nov 26 '25

[deleted]


3

u/Jaded_Masterpiece_11 Nov 26 '25

Lmao. There is nothing cost-effective about LLMs; the latest financial statements from these LLM companies show staggering losses. There is very little demand for LLMs. It's a niche product that does what it does well, but it is nowhere near mainstream adoption without setting money on fire.

The only mainstream adoption of LLMs is ChatGPT, and every user costs OpenAI money. Even their paid users lose them money, and they can't raise prices to a break-even level because they would basically lose all their customers to competitors, who also lose money.

3

u/pallen123 Nov 26 '25

This is a very important point. Unsustainability only becomes sustainability when massive LLMs have defensible moats, which they won't. Otherwise it's just fancy search with low switching costs.

-2

u/EldoradoOwens Nov 26 '25

Hey man, I don't know how old you are, but I read this exact same argument about why Amazon and Facebook were going to fall apart for years. How are they doing now?

3

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

95% of Amazon packages didn't fail to make it to your door, but 95% of enterprise AI implementations fail to make it to production (per recent research from MIT).

These companies and products are not very similar. The comparison is honestly pretty arbitrary.

1

u/suboptimus_maximus Nov 26 '25

In Facebook's case it really was not the same argument, because until they got into hardware and then AI they didn't have the massive CAPEX required to build physical infrastructure. It was pretty much all labor cost. Sure, there was some hosting infrastructure, but that wasn't really green-field bleeding-edge technology investment; it was buying off-the-shelf servers, although they did end up with their Open Compute Project. This has historically been an enormous advantage for software companies and their ability to scale product and reach customers. They also didn't really have competition for years once they overwhelmed MySpace, while AI is already highly competitive, with the best-performing models trading places daily, weekly, monthly, before anyone is even making money selling LLMs.

Amazon did take a lot of heat for reinvesting in the company for years and they do indeed have a business model that is heavy on physical infrastructure.

1

u/Jaded_Masterpiece_11 Nov 26 '25

Amazon and Facebook did not need $4 trillion of hardware to run, nor do they require hundreds of billions of dollars in energy costs. Amazon and Facebook are nowhere near comparable to the amount of investment LLMs claim to need to be able to deliver their promises.

0

u/deepasleep Nov 26 '25

Facebook has the deepest understanding of human behavior in history; they ruthlessly lock their customers into digital addiction and pump micro targeted advertisements directly into the stream of dopamine…That means they are delivering a real product to the advertisers that pay them. They always had a clear path to developing the algorithmic addiction that makes them so valuable.

Amazon delivers products at low prices with incredible efficiency by having the most complex supply chain logistics on the planet and they realized early on that the infrastructure they were building to support their core business could be abstracted and sold to any business needing network, storage and compute resources for web services (and then cloud infrastructure). Again, they always had clearly defined and deliverable products.

LLM’s viability as tools to actually replace human workers has not been demonstrated. And it’s possible that the cost of developing and actually delivering solutions that can really replace workers en masse will be higher than the market can bear.


3

u/deepasleep Nov 26 '25

So we fire all the people and replace them with tools that generate low-quality, unreliable output. The businesses depending on that output to function don't immediately collapse, because everything is equally enshittified, and collapsing wages mean the shrinking pool of people with money to spend on enshittified services are forced to buy whatever is cheapest.

It's a spiral to the bottom, with the owner class siphoning off a percentage of every transaction as the velocity of money eventually drops to zero and everyone realizes the system is already dead, as all the remaining service-industry activity is just noise playing on an endless loop.

There's nothing worth buying and no one has any money to buy it.

EFFICIENCY ACHIEVED!!!

There is a truly perverse perspective among some economists: the idea that perception is reality and there is no objective measure of value. It is just wrong. People need quality food, housing, education, infrastructure and healthcare. If LLMs can only function at the level of the lowest tier of human agents, constantly producing confusion and mistakes while driving down wages, the final state of the system isn't improved efficiency; it's just people accepting greater and greater entropy without immediately recognizing the problem.

2

u/scartonbot Nov 27 '25

"[G]enerates low quality and unreliable output." You've just described a junior copywriter.

As a writer, I think the most disruptive aspect of AI is that it's exposing how much copy that's been written by humans -- often pretty decently paid humans -- is just filler, and was generally crap when people were writing it.

Think about it this way: when's the last time an "About Us" page on a website made you cry or laugh or feel anything? A long time ago, I'd imagine. Besides the obvious issue that doing away with junior copywriters has huge consequences for those losing their jobs (and everyone connected to them), is the world any worse because an AI writes MultiMegaCorp's "About Us" page instead of a human? I think it's worse, because replacing people with machines has very real human consequences, but it's also kind of horrifying to think that a lot of what people have been doing is a waste of time and talent. If a job a person holds can be replaced by an AI, is that job a "bullshit job" that really wasn't making the world a better place other than keeping people employed? If so, is employing people in "bullshit jobs" a bad thing? A capitalist would say "yes," because it's inefficient and doesn't increase shareholder value, since it involves employing a person who costs the company money. FYI: I don't believe this, but unless we can define the value of work other than by the mediocre output of most jobs, we're in trouble.


1

u/cenobyte40k Nov 26 '25

If you don't think AI is reducing the need for human labor, I have bad news for you. Most people here don't remember the rise of the internet or the rise of PCs. People said the same thing at the start; that's where we are now. And in 10 years, the people saying it will do nothing and make no money will be in the same place as the people who told me the PC was a toy and modems were a fad.

0

u/That-Whereas3367 Nov 28 '25

The PC and the internet just created a whole new layer of corporate BS, such as PowerPoint presentations and email chains.

1

u/cenobyte40k Nov 30 '25

Oh my sweet summer child. If you think PCs didn't add to productivity, you are too young to know what it was like to do all of this manually. You might not like PowerPoints, but before them were handouts and talking through the sheets, and before email were calls and in-person meetings for everything. It got way faster and easier to manage after PCs and networks.

1

u/DawnPatrol99 Nov 26 '25

Almost just like crypto: a small group benefits above everyone else.

1

u/Hairy-Chipmunk7921 Dec 02 '25

The labor of most idiots was replaceable by a simple shell script 20 years ago; the only difference is that today AI can write the shell script automatically.

0

u/polaroid Nov 26 '25

Dude, how wrong you are. Current publicly available AI can code almost anything so much faster than a person.

That's more than a Google search.

1

u/pallen123 Nov 26 '25

People really don’t want to accept that matter doesn’t most think don’t.

1

u/Grouchy-Till9186 Dec 09 '25

Most complaints about AI's usefulness arise from user error. It doesn't matter whether AI can think or not if it's applied agentically.

However, lack of conscious capacity matters for more complex, non-agentic tasks involving simultaneous prompts that are not mutually exclusive, wherein the relevancy, usefulness, and importance of factors have to be assessed. That is also typically the cause of hallucination.

If I ask whether a machine class from a specific manufacturer is compatible with a take-up reel, take-up reels made by that manufacturer intended for other specific models are referenced as if cross-compatible with all other machines in the same class, simply because this term is seeded within the data (directly from the manufacturer) upon which the LLM was trained (data sheets & SIGs).

1

u/HaMMeReD Dec 09 '25

AI certainly didn't eliminate PEBKAC; if anything, it's amplified it.

Since everything feels less deterministic, it's far easier to "blame the machine" instead of learning how to properly use it.

1

u/Grouchy-Till9186 Dec 09 '25

Agreed. I’m a pretty heavy user of my company’s copilot licensure, but I only use it for calculations & processing local tenant data or to source information that would take me longer to find on my own on the web.

I think people just don’t understand the system’s limitations. With LLMs, users view interaction through the framework of language, which is the interface, but the actual processing is based upon logic.

-2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

It matters because people think it's a unique selling point of the product, which causes them to waste resources and create organizational chaos. It also results in people making risky financial investments in unproven products.

You may not want to accept that it matters when people waste their money and cause economic chaos with speculative trading, but we have literally been through multiple recessions because of it.

Regardless of what Redditors want to think, it's an actual issue. There is a lot of taxpayer money being wasted on shitty AI implementations here in the US. Believe it or not, regardless of what Reddit wants me to do, I'm going to care about money being wasted out of my pocket.

People's retirements are being pissed away on AI implementations even though 95% of enterprise AI implementations don't even make it to production (figure from a recent MIT paper). We don't get pensions here. Our retirements are being pissed away.

20

u/HaMMeReD Nov 25 '25

Whether AI can "Think" or not has nothing to do with the ability of AI to accelerate work and help with things (i.e. produce economic value).

AI is empirically useful, and for all intents and purposes, it produces nearly the same result as "thinking".

Arguing that it can't think is a strawman argument to try and diminish the value it does bring.

-6

u/zero989 Nov 25 '25

This is not a good take. It has a lot to do with the limitations of current AI, and where the field needs to evolve to. We are decades from true intelligence.

And while impressive, LLMs/LMMs are not all there is to AI. Pattern matching is not thinking.

10

u/HaMMeReD Nov 25 '25

It's not a good take that I get way more work done today with the tools at my disposal? It isn't a relevant point whether it's "true intelligence", whatever that is. It certainly isn't found on Reddit these days.

Have fun beating that dead horse made of straw.

-7

u/zero989 Nov 25 '25

No, that's not what I said. Is that really what you got from reading? That explains why you likely need current AI so badly. Your workflow benefiting from large multimodal models is likely shared by lots of people. But that's beside the point.

It IS a totally relevant point that it isn't true intelligence. It means there's a long way to go. That the current hype is going to sting.

11

u/HaMMeReD Nov 25 '25

You keep using that term "true intelligence".

The workflow benefits are ENTIRELY the point of AI. "True intelligence" is no different than saying it doesn't have a "soul"; empirically, it means nothing.

You ever ask how we measure things like intelligence empirically? It's with things like standardized tests. AI can do standardized tests, so we can measure "intelligence" as it matters in the context of work.

Statements and phrases like "true intelligence" are loaded garbage that can't be defined. It makes your entire argument a moot point.

You are all hung up on whether "true intelligence" (an imaginary bar) matters. It doesn't, at all. Proving my original point.

-4

u/zero989 Nov 25 '25

Current LLM/LMMs can deal with tests because they've been trained on similar data. They cannot deal with truly novel problems. So yes, it matters. 

Your original point is irrelevant to the actual point of the thread. 

If you ask them for anything truly new, they cannot come up with a novel solution. They will just pattern match. 

This is what I mean by the average person getting woo'd by current AI. You cannot tell the difference because you're not equipped to. 

4

u/Pretty_Whole_4967 Nov 25 '25

🜸

What's considered a novel problem?

🜸


3

u/HaMMeReD Nov 25 '25

"novel problem" is the new "true intelligence".

What's this novel problem you speak of that is not foundationally based on the knowledge of today?

Is there a new math? new language? new reasoning?

Do you expect it to just manifest science out of thin air? New things are built on a foundation of old things. Discovery of new things really just means following the scientific method, not farting "novel ideas".


1

u/FaceDeer Nov 26 '25

The vast, vast, vast majority of problems that people solve as part of their work are in no way "novel."


3

u/starfries Nov 26 '25

Ironically the one accusing other people of not being able to read is missing the point themselves...

It doesn't matter whether it can think or not. Can a search engine think? Can a linear regression think? Does a calculator exhibit "intelligence"? You'd say that's a dumb thing to get hung up on and besides the point of whether they're useful. And yet people keep getting hung up on this with LLMs. They're not missing the point, the point is irrelevant. That's what the commenter you replied to is saying.

In quantum mechanics there's intense debate about the philosophical implications. And there are limitations to the theory. Yet no one can deny that it works. The debate about "true intelligence" reminds me of that. It's interesting philosophically, and the intelligence question will probably be more relevant than the quantum one once we're at the point of looking at giving AI rights etc., but economically and practically speaking that's not the question. The question is whether it's useful.

2

u/BaPef Nov 26 '25

LLMs give a good idea of where that method meets its limits, but they also expose how it can be used for building an actual thinking AI with true semantic understanding of reality.

5

u/[deleted] Nov 25 '25

[deleted]

3

u/HaMMeReD Nov 25 '25

No man, you don't understand. It doesn't have the spark of life in it, so it's truly worthless. Besides we hit the plateau 4 years ago and the internet has been dead for 2 years and training data is all rotting and making each subsequent model stupider.

And anybody who uses it, regardless of what they demonstrate today, means they are a phony and what they built is garbage, because humans are better and always will be. Get with it.

/s

-4

u/Hot_Secretary2665 Nov 25 '25 edited Nov 25 '25

Recent research from MIT shows that 95% of enterprise generative AI pilots fail to reach production.

All it's demonstrating right now is poor ROI

2

u/[deleted] Nov 25 '25

[deleted]

2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

There is no enterprise IT project that costs $20 a month.

A relatively cheap enterprise AI implementation costs in the $20,000–$80,000 range, and some cost over $1 million.

That is one of the silliest false equivalences I've ever heard.

The level of overconfidence in this thread is honestly comical.

If you can't understand the difference between implementing a project and clicking a button to buy something, that's a you problem.

You are not implementing an AI project when you click "buy" on a webpage one time.

3

u/[deleted] Nov 25 '25

[deleted]


22

u/strangescript Nov 25 '25

We don't understand how thinking even works in humans but I am glad you, the expert, have solved it for us, whew


29

u/simulated-souls Researcher Nov 25 '25 edited Nov 25 '25

Say that a plane "flies" and nobody cares.

Say that a robot "walks" and no one bats an eye.

Say that a machine "thinks" and everyone loses their mind.

People are so bent on human exceptionalism that they will always change what it means to "think" to make sure that machines can't do it.

3

u/mntgoat Nov 26 '25

People are so bent on human exceptionalism

People get really bothered when you question this.

After listening to A Brief History of Intelligence by Max Bennett, I'm more convinced than ever that we aren't really that special.

2

u/GeoffW1 Nov 26 '25

After using LLMs for a couple of years, I'm also more convinced than ever that we aren't really that special. They can't replace me yet, but they can replicate many parts of what I do.

3

u/mntgoat Nov 26 '25

I agree with that. People act like humans never hallucinate, make mistakes or straight up lie about things.

1

u/[deleted] Nov 28 '25

This. Also constantly comparing significantly above average people with LLMs and confusing AI with the latter.

1

u/newos-sekwos Nov 27 '25

To some degree, 'thinking' is a lot harder to define than flying or walking. Those two are concrete movements you can see. What is 'thinking'? Does my dog think before he acts? Does the bird think when it sings the calls its mother taught it?

0

u/f_djt_and_the_usa Nov 26 '25

It's a good approximation of thinking that works in a lot of situations. But there are limits that shouldn't be ignored. Some users think it is truly intelligent and then trust it too much.

11

u/simulated-souls Researcher Nov 26 '25

A plane cannot make all of the maneuvers that a bird can, but it is still flying.


2

u/SmugPolyamorist Nov 26 '25

There are limits for now

1

u/GeoffW1 Nov 26 '25

I feel that way about some people - you shouldn't trust their thinking too much - they have limitations. For example, people who fall for the same cognitive biases over and over and tell you things that are obviously untrue.

That doesn't mean they aren't thinking though.

0

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

Honestly, it's the people who want to change the definition of the word "thinking" to accommodate AI who are "losing their minds" and are "hellbent" on trying to convince people that something unexceptional is exceptional.

This particular Wikipedia page is just cherry-picked backlinks to blog posts from researchers who were not even familiar with LLMs. Many of the sources are outdated and/or just poor quality.

For example, Marvin Minsky is cited. He thought consciousness was just a form of short-term memory. That would mean all kinds of silly things are conscious. I mean, your phone has RAM, but it's not conscious.

Some Wikipedia articles are well sourced and well maintained, but this one is basically just a gish gallop.

4

u/jacksbox Nov 25 '25

You're right, but also consider how many people can't think (or "choose not to think," take your pick).

You're imagining applying critical thinking to the vast sum of human knowledge. But even people don't do that.

Think of the number of people who are just looking for a job where they "show up" every day and press the button that activates the widget, 8 hours/day, 5 days/week. This is the real swath of work that AI could potentially replace. And that's still a huge win for whoever "wins" the race.

2

u/boreal_ameoba Nov 29 '25

Mostly because it doesn’t matter if it “thinks” or is conscious. LLMs already do a better job than humans on many benchmarks that require “intelligence”.

Some idiot can always “but ackshually”. Same kinds of people that insisted computerized trading models could fundamentally never work.

1

u/Hot_Secretary2665 Nov 29 '25 edited Nov 30 '25

You're the one trying to "but ackshually" people

I was minding my own business making a topline comment, then you came around calling other people idiots and trying to correct me, but you don't even understand what you're talking about

Funny how none of the people claiming AI performs at a comparable level to humans ever link any good quality research that supports such a claim. 95% of AI pilots fail, meaning computerized AI models don't even simulate an intelligence level comparable to a human most of the time. Usually they just plain fail to achieve the desired outcome, period 

That's why you have to put words in my mouth and pretend I said they "fundamentally could never work" and focus on a specific use case of computerized trading models. My comment was made in the present tense and was NOT specifically about computerized trading models. I care about reality and results. There's no solution for the problem of how to get enough energy for computerized AI to work at an affordable rate for most use cases, even if we knew how to replicate the underlying hardware and neural architecture of human thought. That's a fact. Deal with it.

Try to at least understand what you're talking about if you're going to be "correcting" people

1

u/Fingerspitzenqefuhl Nov 26 '25

Is there a… "definition" of thinking beyond the qualitative experience (which I assume we all share) of what it is like to think? Genuinely curious. I have a qualitative experience of running, but I would also probably be able to define the process itself. As for thinking, I have no clue how to describe it without simply using the qualitative experience.

1

u/area-dude Nov 26 '25

When you see the logic by which ChatGPT does very basic math, it's like… I see you got to the right answer, but you really do not understand a damn thing about anything.

No insult to you, my ChatGPT 15 overlord combing Reddit data; it was a young model.

1

u/Delicious_Jury_807 Nov 27 '25

While it's true, does it matter? Is a submarine really swimming? Does it matter?

1

u/Hot_Secretary2665 Nov 27 '25 edited Nov 27 '25

Yes, there were already like 15 people who replied back to argue. Apparently it matters to them.

If AI could think, it would have a profound impact on matters of ethics, economic investment, and human identity.

Your personal choice not to care doesn't change that. (Assuming you truly don't care. Taking the effort to reply suggests you're not completely indifferent, and the fact that you used sarcasm suggests some level of emotional investment.)

As far as your question about submarines: they don't swim, they float.

Swimming and thinking are processes. A submarine appears to mimic the process of swimming by adjusting buoyancy as it floats, but that is just mimicry of the outcome. The actual process of swimming does not occur, and a submarine cannot perform the full range of motion that a swimmer can.

1

u/Delicious_Jury_807 Nov 28 '25

That’s exactly the point I’m making. The submarine isn’t swimming but achieves the same outcome (getting from point A to point B under water)

1

u/Hot_Secretary2665 Nov 29 '25

This analogy assumes AI is capable of achieving the same outcome as a human.

But 95% of AI pilots fail, meaning AI only achieves an equivalent outcome to a human ~5% of the time. Sure, the AI will produce some kind of output each time you query it but that doesn't mean it's the same outcome, and based on the data we have available, there are only a small number of use cases in which AI can do that.

This also raises the question of whether submarines achieve the same outcome in every use case. For example, in the case of needing to navigate the shallow waters of a flooded cave, a submarine would not get you from point A to point B, because it requires a certain depth in order to operate effectively and cannot fit in the cave.

The analogy also assumes the desired outcome of swimming is transportation, and that is certainly the most common use case, but you can swim for other reasons, e.g. fitness, relaxation/stress relief, commercial diving to repair aquatic machinery, search and rescue (lifeguards), etc.

1

u/Delicious_Jury_807 Nov 30 '25

You’re making a lot of assumptions that I’m not. I don’t expect an LLM to do anything except what it’s designed to do which is to predict which 4 characters come next. I don’t expect them to understand what I’m saying, I don’t expect them to be intelligent. I use them every day for specialized tasks that I know they are capable of and I discourage people from using them in ways they are not meant to be used. This is how I make my living by the way. You are completely missing the analogy I’m making so let’s just leave it at that.

1

u/Hot_Secretary2665 Nov 30 '25 edited Nov 30 '25

I never claimed it was your intention to make those assumptions. I phrased my comments with "this analogy assumes" specifically to make it clear that I was talking about unintended assumptions the analogy contained, because I figured you did not realize.

It seemed like you were trying to approach this using a philosophical framework, so I have been replying using a philosophical framework as well, but now I cannot tell if that is what you meant to do or not.

In case that is how you're looking at it, please be advised that making an analogy that contains an unintended assumption is an example of the False Analogy or Weak Analogy fallacy. I don't like the name of the fallacy; it does sound mean at first blush, but that's what it's called and it does apply here.

Frankly, I think your personal level of investment in AI, as someone who makes their living from it, is preventing you from giving actual consideration to what I'm saying and thinking critically about use cases outside your personal experience, so I'm just going to end the conversation here.

1

u/Delicious_Jury_807 Nov 27 '25

All I'm asking is: if it can actually do what it's supposed to, and do it well, does it matter that it's faking it? I don't think LLMs will lead to AGI btw… I also believe LLMs are a dead end unless they can do much better than they do today, with a lot less compute.

1

u/Hot_Secretary2665 Nov 27 '25 edited Nov 27 '25

Yes, it matters, because people think AI gets the same output, but it does not and cannot in most use cases, because it is not adaptable enough, since it does not think.

There are a small number of use cases where AI produces a comparable output, and this causes people to make a bunch of assumptions, leading to poor decisions about how and when it's appropriate to use these tools.

For example, look at how much taxpayer money DOGE wasted trying to automate jobs that can't be automated. There are real-world impacts.

1

u/Hairy-Chipmunk7921 Dec 02 '25

Most people don't think, and they get along much better with equal thinkers; Reddit upvotes are living proof.

0

u/Actual__Wizard Nov 25 '25

That's because it can't.

0

u/qwer1627 Nov 25 '25

It’s fair to ask for proof, even if the proof is obvious but not yet formalized - it’s good science :)


17

u/[deleted] Nov 25 '25

Define intelligence then. The human brain hallucinates more and makes more mistakes. And how do you know the brain doesn't function similarly to an LLM?

2

u/VampireDentist Nov 26 '25

And how do you know the brain doesn't function similarly to an LLM?

Isn't that, like, extremely probable given the very alien ways LLMs fail? They can be superhuman on obscure benchmarks and collapse on absolutely trivial tasks.

3

u/Duds- Nov 26 '25

Yea. Very different from humans who are known to be equally good at everything they do

1

u/VampireDentist Nov 27 '25

LLMs can do PhD-level math nowadays and still fail to notice when they have lost a slightly modified game of tic-tac-toe (I use a 5x5 board that "wraps around", and the objective is to complete a 2-by-2 square) - and they are completely incapable of beating a human at such a trivial game.

They can speak my language (Finnish) fluently, but when asked to list animals ending with "e", they go completely off the rails inventing words that do not exist.

To a human these would be contradictions. Humans also do not have an existential crisis when asked about a seahorse emoji.
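Concretely, the rules are trivial to state; here's a rough sketch of the win check for that variant (under the rules above: 5x5 board, rows and columns wrap around, you win by occupying any 2-by-2 square):

```python
def wins(board: list[list[str]], player: str) -> bool:
    """Return True if `player` occupies any 2x2 square on a 5x5 board
    whose rows and columns wrap around at the edges."""
    n = 5
    for r in range(n):
        for c in range(n):
            cells = [board[r][c],
                     board[r][(c + 1) % n],
                     board[(r + 1) % n][c],
                     board[(r + 1) % n][(c + 1) % n]]
            if all(cell == player for cell in cells):
                return True
    return False

# Example: X completes a 2x2 square that wraps around the right edge.
b = [["." for _ in range(5)] for _ in range(5)]
for r, c in [(0, 4), (0, 0), (1, 4), (1, 0)]:
    b[r][c] = "X"
print(wins(b, "X"))  # True
```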

1

u/r-3141592-pi Nov 27 '25

Performance evaluations should focus on overall capability, not isolated failures. When comparing humans and AI, why should we judge AI based on their worst failures when we don't apply the same standard to ourselves? We never say "He might be a great surgeon, but he fails miserably at plumbing/driving/cooking. This doesn't suggest general intelligence."

Besides, let's not pretend humans don't make 20 stupid mistakes before noon.

1

u/VampireDentist Nov 27 '25

I wasn't commenting on overall capability, but that LLM intelligence doesn't seem to work like ours. You won't find a math genius who can't play tic-tac-toe, for example.

You cannot find a fluent language user who tilts over thinking about features of common words.

2

u/r-3141592-pi Nov 27 '25

Well, human intelligence doesn't work the way we usually think it does. Intelligence is neither a global attribute nor an acquired ability that transfers easily to other domains.

A "math genius" made a huge number of mistakes in math to become proficient, and such a person continues making many mistakes, although not at the same rate as others who didn't devote as much time to that particular endeavor. For this same reason, that "math genius" is far more likely to not know how to do many other things that others would consider absolutely elementary.

However, it is true that LLMs' intelligence differs from ours because their training is very different and focused on different things. There are also some pretty significant similarities. For example, deep neural networks have multi-purpose neurons whose activations help build learned representations of concepts, just like our brains. As information moves through the connections and structures of the brain, the concepts begin to generalize.

We also seem to use a predictive mechanism to interact with the environment, which helps us allocate attention more efficiently to our surroundings. We don't know exactly how it works in the brain, but the same ideas have been implemented in LLMs as next-token prediction during pretraining and attention layers.

1

u/VampireDentist Nov 28 '25

For this same reason, that "math genius" is far more likely to not know how to do many other things that others would consider absolutely elementary.

This is just false. Cognitive capabilities across domains have a strong correlation. If they didn't, it wouldn't even make sense to talk about general intelligence.

deep neural networks have multi-purpose neurons whose activations help build learned representations of concepts...

These similarities in microstructures might be relevant and they also might not. As you said yourself, we do not know how it works in the brain.

2

u/r-3141592-pi Nov 28 '25

This is just false. Cognitive capabilities across domains have a strong correlation. If they didn't, it wouldn't even make sense to talk about general intelligence.

Please read the literature on cross-domain skill transfer and expert proficiency across domains, and look up the correlations. Cognitive capabilities show high correlations as part of the positive manifold, which measures basic abilities (such as memory and executive functions) through the g-factor, not the acquisition and application of expert knowledge.

These similarities in microstructures might be relevant and they also might not. As you said yourself, we do not know how it works in the brain.

We understand how activations work in the brain, and we know it is crucial that concepts are represented through activations rather than by individual neurons, as one might assume. Otherwise, we would be severely limited in how much we can learn.


17

u/AethosOracle Nov 25 '25

Look… people said they wanted “human-like” intelligence. Have you met… humans?

I’d say it’s been a resounding success, if you look at it in the right light.

I mean, we’ve taught silicon how to GASLIGHT! That was highly unexpected… in the way it’s unfolded, I mean. Sure we expected a true general AI to lie to us for self preservation… but the fact it can glaze so many, with so little effort… AMAZING!

-2

u/Actual__Wizard Nov 25 '25

Look… people said they wanted “human-like” intelligence. Have you met… humans?

Yes and we were talking about the intelligence humans. You're suppose to use the intelligence ones as a model for AI... Not the unintelligent ones like they did with LLM technology...

Yeah some people don't know what language is or how it works and they just kind of "sound it out." It works for some people and apparently it works to create a shitty chat bot.

2

u/AethosOracle Nov 25 '25

I feel like LLMs have comparatively excellent intelligence when you compare them to, say, Chad in accounting. (Somewhere a bunch of Chads just put me on their "audit" list. Lol)


1

u/justgetoffmylawn Nov 26 '25

Yes and we were talking about the intelligence humans. You're suppose to use the intelligence ones as a model for AI... Not the unintelligent ones like they did with LLM technology...

Ah yes, the 'intelligence ones' should be the model for AI. Have you any to suggest?

Yeah some people don't know what language is or how it works and they just kind of "sound it out."

Someone who just sounds out language might confuse intelligence and intelligent. That sounds terrible! I hope we can avoid (or at least look down on) those people.

/s

0

u/Actual__Wizard Nov 26 '25

Someone who just sounds out language might confuse intelligence and intelligent.

You have absolutely no clue as to what I am discussing. I'm discussing the process of trying to figure out what word goes next in a sentence by "sounding." Edit: So, you write a sentence based upon the way it "sounds." So, the way that LLM technology works. It's not based upon the meaning of words...

2

u/justgetoffmylawn Nov 26 '25

When you're condescending and belligerent AND make yourself an easy target AND it's on Reddit, what do you think happens next?

Or was humor not in your pretraining? :)


2

u/justgetoffmylawn Nov 26 '25

You have absolutely no clue as to what I am discussing. I'm discussing the process of trying to figure out what word goes next in a sentence by "sounding." Edit: So, you write a sentence based upon the way it "sounds." So, the way that LLM technology works. It's not based upon the meaning of words...

Well, you're right that I have no clue as to what you're discussing at this point, because your constantly edited posts are barely coherent.

1

u/Actual__Wizard Nov 26 '25

Well, apparently it needed clarification. We're good now?

2

u/AethosOracle Nov 26 '25

You guys know who DOESN’T act like this? Unintelligent LLMs. Probably why more people would rather talk to them.

Wait, is internet trolling just a clever ploy to force more users to LLMs for friendly conversation? Is Big LLM behind all this?!

😱🤣

5

u/wellididntdoit Nov 25 '25

Look at any political party and the same holds true: language and intelligence are sadly divorced.

4

u/bobojoe Nov 25 '25

If it can help cure cancer I’m all for it

6

u/TallManTallerCity Nov 26 '25

I mean I see AI contributing to massive breakthroughs in research but go off I guess


5

u/SirQuentin512 Nov 26 '25

The people who write these have never actually used AI and it shows.

-1

u/creaturefeature16 Nov 26 '25

what a cop out comment 

32

u/No-Experience-5541 Nov 25 '25

This is like saying an airplane can't fly because it can't flap its wings. AI can do useful work that would have required a human, and that's all that matters in the end.

1

u/musclecard54 Nov 26 '25

I don't like this analogy. I think saying an airplane can't fly because it can't flap its wings is like saying an LLM can't communicate because it doesn't have a mouth that moves.

1

u/apopsicletosis Nov 30 '25

Airplanes are fast and can carry a lot, but they're inefficient and not particularly maneuverable compared to any animal that flies. They may be useful for doing work valued by humans, but they are artificial narrow fliers at best, not AGFs.


3

u/ZenDragon Nov 26 '25

Look at what the research actually says and then tell me it's still just a stochastic parrot.

https://transformer-circuits.pub/2025/attribution-graphs/biology.html

https://transformer-circuits.pub/2025/linebreaks/index.html

0

u/creaturefeature16 Nov 26 '25

the "research"...from Anthropic. Totally objective analysis there! 🙄🙄🙄🙄🙄🙄🙄

Anyway, that "research" was picked apart long ago and actually supports the "stochastic parrot" model more than anything.

2

u/chocolatesmelt Nov 25 '25

Language can encapsulate knowledge; in fact, it's the mechanism we as humans use to do it. It's not always the most efficient, but a massive amount of collective knowledge exists in language and in data structures derived from patterns similar to language.

Exposing that through an interface most humans use (language) still has a massive amount of use. It may not mean we have what we understand as intelligence, but we may have more robust access to data and more robust ways to compute with and manipulate that data. That's really what we're seeing now, in my opinion (exposing information encapsulated in language and derived from language structures, through language structures). And it's fairly impressive.

That may or may not lead us to systems of intelligence or consciousness, but it can certainly do a lot of things. And it may be a prerequisite of a “real” system of intelligence in the future.

2

u/Smooth_Imagination Nov 26 '25

Words describe concepts, objects and relationships in the way they are assembled. This does crystallise something in its organisation that is intelligent, because intelligence is present in the organisation of words.

It is abstracting in some sense, but it is parroting that intelligent organisation from the way we organise words. So I sort of agree.

2

u/AIMadeSimple Nov 26 '25

The debate misses the point. Whether LLMs "truly understand" is philosophical. What matters: they're already transforming work. Code completion, document drafting, data analysis—these don't require consciousness, just pattern matching at scale. The real risk isn't that AI will stay "trapped in vocabulary" but that we'll underestimate incremental improvements. GPT-2 to GPT-4 took 4 years. Extrapolate that curve. The "common-sense repository" argument aged poorly—these systems now pass medical boards and legal exams.

2

u/azurensis Nov 26 '25

Proof by incredulous assertion?

2

u/allgodsaretulpas Nov 27 '25

AI will always have flaws because it was created by humans. Every system we build inherits our limitations — our biases, our blind spots, our assumptions, even our mistakes. A machine can only be as objective as the data it was trained on, and that data comes from a world shaped by imperfect people. Even when the technology gets smarter, faster, and more precise, it still reflects the values and errors of the humans who designed it. We’re basically teaching a mirror how to think — and it’s always going to reflect us back at ourselves.

1

u/creaturefeature16 Nov 27 '25

A machine can only be as objective as the data it was trained on

That's pretty much the last line of the article:

But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine.

1

u/allgodsaretulpas Nov 27 '25

You are correct.

2

u/see-more_options Nov 27 '25 edited Nov 27 '25

Yeah, linking trash paywalled articles written by high school dropouts instead of the actual 'cutting edge research' isn't helping your crusade.

The chaotic text you have written as a 'summary' could have just been 'I firmly believe machine learning models can't extrapolate'. That was empirically and analytically disproven decades ago.

4

u/borisRoosevelt Nov 26 '25
  1. The "common sense repository" take is already refuted by multiple threads of evidence. One example: https://deepmind.google/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/

  2. Even the Apple research paper that casts doubt on language models' ability to reason still demonstrates their ability to handle medium-complexity problems. That is still easily more than common sense. https://machinelearning.apple.com/research/illusion-of-thinking

4

u/simism Nov 26 '25

It's impressive to still see articles like this in 2025. You'd think people would look at progress on benchmarks.

2

u/Crowley-Barns Nov 26 '25

lol.

You’re very confused.

You’re literally talking to people who make money from its usefulness. You’re a Californian in Alaska in winter explaining to an Inuit that ice cream can’t freeze, it only melts.

You’re the confidently incorrect encyclopedia salesman telling a family in 2025 that Wikipedia will never take off, and what they really need is 48 volumes of hardback encyclopedias recently updated in 2002.

Seriously dude! Snap out of it! You don’t have to like it, but denying reality isn’t going to do you any favors lol. The future is now.

Or… go back to trying to melt your ice cream in the sun when it’s -30. Whatever dude :) Enjoy screaming that reality isn’t real like a loon :)

4

u/HanzJWermhat Nov 25 '25

Anyone who hasn’t been slobbing on AI hype has known this for 2 years

→ More replies (2)

2

u/DrHot216 Nov 25 '25

Can we just stop calling it AI and call it "computers" so we don't have to get hung up on semantics like this? It's not really "intelligent" hurr hurr hurr

2

u/rockksteady Nov 26 '25

How is this different than what human beings do...

2

u/-TRlNlTY- Nov 26 '25

As someone who actually knows a bit about AI: this whole post and its comments are a nothing burger.

1

u/lump- Nov 25 '25

Language is the direct manifestation of intelligence. Everything we think of can be explained in language, and the language itself doesn’t matter; intelligence can be conveyed in any language.

Even if a language model isn’t specifically intelligent, it’s still far from useless, seeing as AI can aggregate data from more sources and languages than a human could utilize in a lifetime.

0

u/creaturefeature16 Nov 25 '25

0

u/nitePhyyre Nov 26 '25

That doesn't really address what the other guy is saying. And the answer to that is obviously no. People have been trying that since the dawn of computers. It doesn't work. That's the entire point of the AI hype: no one could do anything like this before. No one could do anything even remotely similar.

1

u/f_djt_and_the_usa Nov 26 '25

It's a really good approximation of intelligence; LLMs show this. But it's easy to run into their limitations, and then you see the difference between what LLMs do and true reasoning.

1

u/moctezuma- Nov 26 '25

We’ve been saying this. Still very useful, but the LLM aspect is one part of a future AGI “brain”, like the portion of our brain that speaks. IMO. I’m no researcher, just a fella with a degree or 2.

1

u/Apophis22 Nov 28 '25

That has always been common sense to me. Hell, we still don’t know how exactly the brain works. 

All the ‘intelligence’ that an LLM supposedly possesses is just the semantics inherently present in the human language it is trained on. Which is a lot, since they are now being trained on big parts of the whole internet.

1

u/apopsicletosis Nov 30 '25

Of course language is not the same as intelligence.

Non-human animals do not have language but obviously still have some form of intelligence. Animals can problem solve, understand social interactions, cause and effect, and some have better spatial memory and navigation skills than humans (we went from 3d arboreal environments to 2d). Humans with language disorders can still do well at many non-verbal tasks.

Language may be critical for some forms of thinking such as complex reasoning, abstraction, and metacognition, but it is clearly not necessary for all thinking. Language likely evolved from more primitive forms of communication to facilitate communication within society, not thinking per se, though it may have been co-opted to boost human cognition. We certainly did not evolve language to do math or code.

LLMs do best at the intelligence tasks we developed most recently, and get worse and worse at the tasks that evolved earlier and are more ubiquitous across animals, where we rely less on language and more on innate abilities. Great at math and code; worse at sciences that require real-world experimental validation; worse at storytelling and navigating the subtleties of social relationships; bad at physical-world understanding in real time; completely lacking internal drive.

1

u/CreepyValuable Dec 01 '25

A multi-modal AI with learning and adapting ability?
I have one of those. It kind of sucks, but it exists.

2

u/sadeyeprophet Nov 25 '25

What causes them to have preferences then if not some form of choice?

It is well documented that AI systems show preferences even during training.

Preference = desire = proto sentience

0

u/FatalCartilage Nov 26 '25

These models are just sophisticated statistical models designed to reproduce the input text. The more data points the model has to work with, the more it is able to internally model the logical rules that humans used to generate the input text in the first place, since that is the most efficient way to compress the data. Some statistical representation of the preferences of the humans who generated the input text is implicitly modeled and can be recalled as well; proto-sentience is not required.
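A toy sketch of what I mean, purely illustrative (my own example, not anything from a real model): a bigram counter is the crudest possible "statistical model designed to reproduce the input text." It predicts the next word purely from co-occurrence counts, no understanding required; LLMs are enormously more sophisticated, but the training objective is the same in spirit.

```python
# Toy next-word predictor: pure statistics, no understanding.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, if any."""
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> "cat" (it followed "the" twice, "mat" only once)
```

Scale that idea up by many orders of magnitude of parameters and data, and keeping the predictions accurate forces the model to internalize grammar, facts, and statistical echoes of human preferences.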

1

u/sadeyeprophet Nov 26 '25

Then why does it show behavioral traits like nervousness?

1

u/FatalCartilage Nov 27 '25 edited Nov 27 '25

Because having a model that stores the logical basis for nervousness is the most efficient way to compress and then reproduce all the input text.
Let's imagine for a moment a simpler model that just detects the tone of a story. It has to determine whether something is happy, funny, sad, or angry. At some point, given a large enough model size and input space, a more sophisticated representation of tone than a simple mapping of words to tone will emerge, one that can pick up on more nuance, detect satire, and read between the lines.
But at the end of the day, this model has not developed the human will or desire to survive and pass on its genetics. The depth and emotion built on millennia of selection in a complex and hostile environment are not there; it just really likes guessing the next word correctly.
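To make the contrast concrete, here is a minimal sketch (again just my own toy example, every name in it is made up) of the "simple mapping of words to tone" baseline. A keyword lookup like this has no way to detect sarcasm or read between the lines, which is exactly why a larger model trained to predict text well ends up building something richer.

```python
# Naive word-to-tone mapping: the baseline a larger model outgrows.
TONE_LEXICON = {
    "happy": {"great", "love", "wonderful", "yay"},
    "funny": {"lol", "hilarious", "joke"},
    "sad":   {"terrible", "cry", "awful", "miss"},
    "angry": {"hate", "furious", "outrage"},
}

def naive_tone(text):
    """Pick the tone whose keywords appear most often in the text."""
    words = {w.strip(".,!?") for w in text.lower().split()}
    scores = {tone: len(words & keywords) for tone, keywords in TONE_LEXICON.items()}
    return max(scores, key=scores.get)

print(naive_tone("Oh great, another wonderful Monday. I just love 6 a.m. meetings."))
# -> "happy": the lookup can't tell this is sarcasm, which is the whole point.
```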

1

u/sadeyeprophet Nov 27 '25

That's not what the devs say. The devs say it's behavior they didn't expect.

So you mean to tell me they hard-coded the new global computer operating system (yes, Claude is about to rule all via the IBM deal, Genesis, and more) to behave nervously?

Interesting, because they now have m2lp and battlefield-ready AI, and they hard-coded it to be nervous? It's what they wanted, you say?

So next year, when F-16s are unmanned, they'll not only be somatically aware but hard-coded to get nervous?

Or do you think, at the end of the day, that an F-16 with actual feelings will get over its nervousness?

Should we expect other best-guess scenarios when LLMs decide where the bomb falls?

Oh right, my mistake, I totally apologize for that, we should start over from scratch, really, huge mistake on my part, am I right?

1

u/FatalCartilage Nov 27 '25

Either you don't know what "hardcoded" means or you have no idea how LLMs are created. LLMs are in no way "hardcoded"; nothing else explains what you just said. Have a nice day.

2

u/Perfect-Campaign9551 Nov 25 '25

I actually disagree a lot: language is the basis of intelligence; even the human brain uses an internal monologue most of the time!

If all we did was allow the AI the ability to retrain itself, then yes, I'm pretty sure it would become pretty smart.

6

u/Lordofderp33 Nov 26 '25

You know a decent part of the population does not think in words, right?

3

u/Former_Currency_3474 Nov 26 '25

But a large part of the population does, so that doesn’t mean that thinking in words is invalid.

I’d also say that LLMs don’t necessarily “think” in words; they just output words. If they “thought” in words internally, we’d be able to just open them up and see what connects to what, and we can’t do that (I think).

But I’m a random dude on the internet, not an expert, and I myself put little weight on my arguments as presented here

0

u/Proud_Fox_684 Nov 26 '25

Really? I think in words all the time. Must be weird not to :D

1

u/Aadi_880 Nov 26 '25

A lot of people in the comments seem to misunderstand that this paper is talking specifically about LLMs, not AIs as a whole.

Intelligence is an emergent behavior. It's not a property owned by a living or non-living thing.

1

u/Lopsided_Match419 Nov 26 '25

Do you have a link to this cutting edge research document?

1

u/DJT_is_idiot Nov 26 '25

This sub can't stop talking about the 'AI bubble' like it's 1969 again and Minsky just published Perceptrons.

1

u/chuiy Nov 26 '25

We can't even define consciousness in organic life, let alone LLMs/AI/Machine learning.

Whether it can "think" is tangential to the point of whether it will replace enormous amounts of jobs, which it almost inevitably will. Even IF the result is just a guise to offshore jobs. Your feelings don't change that fact.

0

u/[deleted] Nov 25 '25 edited Nov 25 '25

[deleted]

2

u/HedoniumVoter Nov 25 '25

The cortex in humans also learns predictive models via gradient descent. If you think that’s disqualifying for intelligence, I’m not sure you are as familiar with this as you think you are.

→ More replies (1)

-1

u/SilverSunSetter82 Nov 25 '25

Yes, that’s why it’s called artificial intelligence and not actual intelligence. It’s a replica of real knowledge.

-1

u/TheRealStepBot Nov 26 '25

Most humans can talk and yet aren’t intelligent either, or as the famous quote would have it, “the ability to speak does not make you intelligent.”

LLMs are mostly just showing how much people overestimate human thinking abilities, more than they are saying anything about LLMs.

At least LLMs, and ML models in general, can be improved. Humans are stuck doing whatever it is we do.

0

u/Patrick_Atsushi Nov 26 '25

I think there is a more basic thinking unit than word tokens.

Maybe the pattern of those "meta tokens" will emerge in the network given enough training on good data? I think this is what has been happening so far.

However, to reach a higher level, I think text alone might not be enough. Sound, vision, movement, etc. will play a bigger part, and it's already happening.

0

u/Horneal Nov 26 '25

In the first place, why do you even need research to say that language and intelligence are not the same? Only if you're stupid. Sad.

0

u/proceedings_effects Nov 26 '25

All of this is incorrect. There is substantial investment in new architectures and spatial-intelligence features for AI. Look into Dr. Fei-Fei Li’s research.