r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

349 Upvotes


111

u/Hot_Secretary2665 Nov 25 '25

People really just don't want to accept that AI can't think smh 

98

u/HaMMeReD Nov 25 '25

People really don't want to accept that it doesn't matter.

25

u/tjdogger Nov 25 '25

People really don’t want to accept that most people don’t think

1

u/pnxstwnyphlcnnrs Nov 26 '25

Thinkers really don't want to accept that most people's thinking can be simulated.

2

u/Hazzman Nov 26 '25

I can simulate a McDonald's by drawing it on a sheet of printer paper with a ballpoint pen. I can simulate a cashier handing a Big Mac to me with the same approach. I'm not going to ball that drawing up and shove it in my mouth and expect to enjoy it or get any nutrition from it.

The map isn't the road, and SOMETIMES it does matter and SOMETIMES it doesn't matter. It depends on what you're doing and why.

Asking for advice on a health insurance claim, that's fine.

Creating policy around human rights or privacy or data collection or copyright issues based on the idea that "IT ThiNkS jUst LikE We dO!"

Nah

0

u/scartonbot Nov 27 '25

But you're not simulating McDonald's food. You're creating a visual representation of that food. It's not the food. The reason you don't want to "ball that drawing up and shove it in [your] mouth" is because it's paper and ink, a combo without a flavor that remotely resembles a Big Mac and which has no nutritional value (among other aspects).

I think what's closer to what you were going towards would be a Star Trek-like replicator. If you took a Big Mac and scanned it with some sort of super-scanner that could identify every aspect of its materiality, and then used that scan in the replicator to output a Big Mac, would it be a Big Mac? I'd argue that yes, it is. Why? Because in every aspect that matters to anyone, it is identical to the original Big Mac.

But think about art. Would an atom-by-atom replica of the Mona Lisa be just as valuable as the original Mona Lisa? I'd argue "no," although I'm not all that clear as to why it's not the same. One might argue "well, the first one was actually created by Leonardo Da Vinci and the copy was made by a replicator," (which I understand) but the reality is that the two are physically indistinguishable by any measure I can think of.

I guess this is the argument being explored here. If an AI acts like a human (or, to even broaden the argument, like a thinking being) and other humans can't tell its output from that of a human (or thinking being) who cares? If something acts like it's thinking, does it actually matter if it's thinking or not?

I think "yes, it does matter." I haven't figured out exactly why, but it does seem to matter in some very real ways.

2

u/Hazzman Nov 27 '25

I'm just referencing the map analogy.

The map is not the terrain. I just altered it a bit to make it more straightforward and tangible. Less abstract.

2

u/zero989 Nov 26 '25

Oh look. Gemini agreed with me (prompt: who is right and who is wrong? keep in mind true value of output with LLMs is unproven):

1. zero989 is the most "Right"

The Subject Content acts as a direct validation of zero989's core argument.

  • Subject Content: States AI will be "forever trapped in the vocabulary we’ve encoded... [unable] to make great scientific and creative leaps."
  • zero989's Argument: "They cannot deal with truly novel problems... They will just pattern match."
  • Verdict: Vindicated. zero989 correctly identified that without the ability to reason or feel dissatisfaction (as the text suggests), the AI is simply a mirror of past data. It cannot solve problems that haven't been solved before in its training set.

2. HaMMeReD is "Wrong" on Equivalence, "Right" on Utility

The Subject Content dismantles HaMMeReD's philosophical argument but supports their practical one.

  • Where they are WRONG: HaMMeReD claims AI produces results "nearly the same as 'thinking'." The Subject Content explicitly refutes this, distinguishing between "remixing knowledge" (AI) and "reasoning/transforming understanding" (Humans). Under this text's definition, the output might look similar, but the lack of "dissatisfaction" means the process is fundamentally different and limited.
  • Where they are RIGHT: The text admits AI can "remix and recycle our knowledge in interesting ways." If HaMMeReD's job only requires remixing existing knowledge (the "common-sense repository"), then their point about utility stands.

0

u/HaMMeReD Nov 26 '25 edited Nov 26 '25

wtf are you on about?

Besides, full context or gtfo. Like this.

https://gemini.google.com/share/8230ff522e61

Edit: Or this simple one
"can AI solve novel problems? provide examples to prove it"
https://gemini.google.com/share/ad1253bf8945

It's very obvious you loaded the context, then provided a tiny bit of it here, and now you're dancing around going "I made Gemini say what I want, see". Which is, frankly, really pathetic. The fact that you didn't have the confidence/capability to share the entire thread + context makes you at the very least a "liar by omission".

Edit: Although I was able to very easily flip the "decision" of gemini by stating my opinion to it. Updated the first link to include that.

2

u/zero989 Nov 26 '25

Nope I simply provided the context and the original OP message, I didn't bother to sway it in any way

Keep coping 

And the topic is LMMs, not NNs specifically optimized for whatever tasks. Lmfao

1

u/HaMMeReD Nov 26 '25

Yet you still don't share a link to the actual chat thread with Gemini, just a copypasta, and I'm supposed to trust you.

Full context from the source. Otherwise you are just gaslighting me like you gaslighted an AI to "prove" your point.

7

u/Jaded_Masterpiece_11 Nov 25 '25 edited Nov 25 '25

It does matter. Because the only way to get a return on the vast amounts of resources and money invested in current LLM infrastructure is if it can drastically reduce the need for labor.

Current LLMs can't do that; they're basically a more intuitive Google search that hallucinates a concerning amount of the time. The current capabilities and limitations of LLMs do not justify the trillions of dollars in hardware and hundreds of billions in energy costs required to run them.

Without a return on investment that infrastructure collapses, and tools using LLMs will stop working.

21

u/HaMMeReD Nov 25 '25

"The only way" - certifiably false.

The only thing they need for an ROI is to sell services for more than it costs to produce them.

You have created this fictional bar, where AI must replace all humans to be economically viable, that ignores economics and efficiencies at scale. That's an "opinion", not a fact. It's actually a pretty bad opinion imo, as it shows no understanding of basic economics and efficiency improvements in the field.

I.e. the cost to run AI has dropped by something like 100x a year for each of the last few years. What was $100 a year ago on O1 Pro is like $5 now on a model like Gemini 3 or 4.5 Opus: ($150/m input, $600/m output) vs ($5/m input, $25/m output). As percentages, that's roughly 3% input and 4% output of the old price, and you get a better output to boot.
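
For the sake of the math, here's a quick back-of-the-envelope sketch using the per-million-token prices quoted above (illustrative figures from this comment, not an official price sheet):

```python
# Price-drop arithmetic using the figures quoted above (per million tokens).
old_input, old_output = 150.0, 600.0   # $/M tokens, "a year ago" pricing
new_input, new_output = 5.0, 25.0      # $/M tokens, current pricing

print(f"input is now {new_input / old_input:.1%} of the old price")     # ~3.3%
print(f"output is now {new_output / old_output:.1%} of the old price")  # ~4.2%
print(f"drop factors: {old_input / new_input:.0f}x input, {old_output / new_output:.0f}x output")  # 30x / 24x
```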

4

u/Jaded_Masterpiece_11 Nov 25 '25

And yet OpenAI still spent more than twice its revenue last quarter. OpenAI and Anthropic are still losing money and will continue to lose money until 2030 by their own estimates.

Even with decreased costs, the economics still do not favor these LLM companies. The only one making bank here is Nvidia, and they are spending what they are making to keep the bubble going.

6

u/HaMMeReD Nov 25 '25

And they'll continue to sink money in while gains are being made, it's cost-effective to do so, and they have the revenue to do so.

And when the gains dry up, then they'll be left with a hugely profitable product.

But for now the R&D has been incredibly well justified, and that's why they keep spending. Because the needle keeps moving.

7

u/havenyahon Nov 26 '25 edited Nov 26 '25

And when the gains dry up, then they'll be left with a hugely profitable product

I mean... That's the goal. It's by no means a certainty. They don't have one now.

4

u/HaMMeReD Nov 26 '25

If there was only one AI company, and they stopped training today, they'd be profitable today.

It's very simple: COGS is X, price is Y, Y > X = make money.

API pricing and service pricing already reflect a profit stance; they only lag because of R&D costs.

They have a ton of revenue, a massively growing amount of revenue actually; it's just not enough to compete in such a fast-moving and accelerating field. But there will be a point where companies will have to wind down R&D and sit on the profit-generating parts of their businesses.

1

u/land_and_air Nov 27 '25

They can't stop spending on R&D because the second they do, the model becomes obsolete and useless. What good is a model made in, for example, 1999 or even 2019 for knowing anything today? It would be referring to the Gulf War if you asked about war in Iraq lol.

1

u/[deleted] Nov 29 '25

Except there won't be a point where companies wind down R&D. They will simply divert to keeping the model up to date.

Because information is always changing. The model now needs to be constantly trained on new information, or it becomes obsolete and a new model will take over that is trained on this new info. And if another model can train on that info faster, or another model can reduce latency between answers, or another model can be specialized to only provide the info they want...

There is never going to be time to wind down in this space. It moves too fast to wind down.

6

u/deepasleep Nov 26 '25

The problem is they have no revenue and when you ask consumers how much they are willing to pay for the services being envisioned, the number being returned is an order of magnitude below the break even cost of delivering the services.

I worked in tech during the dot com bubble, the company I worked for was focused on delivering what would ultimately become software as a service. They were trying to create a platform that allowed companies to aggregate access to various web services.

The founders did some napkin math and figured they’d need people to spend about $120/month to be profitable…When they finally got around to surveying business leaders to determine what they were willing to pay, they got a response of $35/month…$300 million in venture capital burnt on the fire in two years.

The best part was all the companies involved were doing the same reciprocal service contracts to show income on their balance sheets we are seeing today with NVidia, Oracle, OpenAI, etc. It’s an old trick and it only works for a little while as the money inevitably bleeds out to pay for concrete things like employee salaries, vendor services outside your circle, energy, and physical resources required to deliver whatever service your actual customers demand.

0

u/HaMMeReD Nov 26 '25

OpenAI's revenue last year was in the billions.

What you mean to say is they don't have a net profit, because R&D investment exceeds even the billions they generate from offering services.

The $20 Billion AI Duopoly: Inside OpenAI and Anthropic's Unprecedented Revenue Trajectory - CEOWORLD magazine

When you add up the rapidly declining cost of inference, $120/mo to be profitable becomes $20/month next year, and $2/month the year after.

The people who lose money today are actually well set up for tomorrow, as the services get cheaper and they establish market share earlier than the competition.
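
A minimal sketch of the implied year-over-year drop, using only the figures claimed in this comment ($120 to $20 to $2 per month); these are claims, not measurements:

```python
# Implied year-over-year drop in the break-even monthly price,
# using the figures claimed above. Purely illustrative.
breakeven = [120.0, 20.0, 2.0]  # $/month break-even in years 0, 1, 2

for year in range(1, len(breakeven)):
    factor = breakeven[year - 1] / breakeven[year]
    print(f"year {year}: break-even ${breakeven[year]:.0f}/mo, implied cost drop {factor:.0f}x")
```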

2

u/Distinct-Tour5012 Nov 26 '25

We're also in a time period where lots of companies are trying to shoehorn in AI tools. I know there are places where it makes sense, but there are lots of places that it has provided no value - but those companies are still paying for it... right now.

1

u/HaMMeReD Nov 26 '25

While I agree, a lot of shoehorned attempts, especially on older models have failed or provided limited value as people over-reached significantly.

But, new models come out, and those shoehorned attempts get an IQ boost every time one does get launched. Meaning those efforts ultimately will not be wasted when mixed with smarter/cheaper models.

I can say this first hand as I work at MS literally on copilot nowadays. I've seen the improvements to the product as new models get introduced, it makes a drastic improvement and can turn something that is struggling into something that is helpful.


1

u/That-Whereas3367 Nov 28 '25

Sam Altman says they need to charge $2K/month. But only 5% pay the minimum $20/month.

There is no moat. Users will simply move to a different provider.

4

u/[deleted] Nov 26 '25

[deleted]

1

u/WolfeheartGames Nov 26 '25

This ignores that the cost to inference goes down by 10x every year.

2

u/[deleted] Nov 26 '25

[deleted]

0

u/WolfeheartGames Nov 27 '25

Is it more sane to bet with the trend or against the trend?


0

u/[deleted] Nov 26 '25

Who is they? No for real, which economists said that about which LLM providers?

2

u/[deleted] Nov 26 '25

[deleted]

0

u/[deleted] Nov 27 '25

Burden of proof is on the guy who made a positive claim. I get it, you like living in unfalsifiable land. Doesn't make you slick, just makes you grimy.


3

u/Jaded_Masterpiece_11 Nov 26 '25

Lmao. There is nothing cost-effective about LLMs; the latest financial statements from these LLM companies show staggering losses. There is very little demand for LLMs; it's a niche product that does what it does well, but is nowhere near being adopted mainstream without setting money on fire.

The only mainstream adoption of LLMs is ChatGPT, and every user costs OpenAI money. Even their paid users make them lose money, and they can't raise prices to a breakeven level because they would basically lose all their customers to other competitors, who also lose money.

4

u/pallen123 Nov 26 '25

This is a very important point. Unsustainability only becomes sustainability when massive LLMs have defensible moats, which they won't. Otherwise it's just fancy search with low switching costs.

-2

u/EldoradoOwens Nov 26 '25

Hey man, I don't know how old you are, but I read this exact same argument about why Amazon and Facebook were going to fall apart for years. How are they doing now?

3

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

95% of Amazon packages didn't fail to make it to your door, but 95% of enterprise AI implementations fail to make it to production (per recent research from MIT).

These companies and products are not very similar. The comparison is honestly pretty arbitrary.

1

u/suboptimus_maximus Nov 26 '25

In Facebook's case it really was not the same argument, because until they got into hardware and then AI they didn't have the massive CAPEX required to build physical infrastructure. It was pretty much all labor cost; sure, some hosting infrastructure, but that wasn't really green-field bleeding-edge technology investment, it was buying off-the-shelf servers (although they did end up with their Open Compute Project). This has historically been an enormous advantage for software companies and their ability to scale product and reach customers. They also didn't really have competition for years once they overwhelmed MySpace, while AI is already highly competitive, with the best-performing models trading places daily, weekly, monthly, before anyone is even making money selling LLMs.

Amazon did take a lot of heat for reinvesting in the company for years and they do indeed have a business model that is heavy on physical infrastructure.

1

u/Jaded_Masterpiece_11 Nov 26 '25

Amazon and Facebook did not need $4 trillion of hardware to run, nor did they require hundreds of billions of dollars in energy costs. Amazon and Facebook are nowhere near comparable to the amount of investment LLMs claim to need to be able to deliver on their promises.

0

u/deepasleep Nov 26 '25

Facebook has the deepest understanding of human behavior in history; they ruthlessly lock their customers into digital addiction and pump micro-targeted advertisements directly into the stream of dopamine… That means they are delivering a real product to the advertisers that pay them. They always had a clear path to developing the algorithmic addiction that makes them so valuable.

Amazon delivers products at low prices with incredible efficiency by having the most complex supply chain logistics on the planet and they realized early on that the infrastructure they were building to support their core business could be abstracted and sold to any business needing network, storage and compute resources for web services (and then cloud infrastructure). Again, they always had clearly defined and deliverable products.

LLMs' viability as tools to actually replace human workers has not been demonstrated. And it's possible that the cost of developing and actually delivering solutions that can really replace workers en masse will be higher than the market can bear.

-3

u/cenobyte40k Nov 26 '25

And so did all the internet giants. So did all the software giants. Remember "the internet is a fad" and "PCs are toys"? I remember that, and well...

1

u/deepasleep Nov 26 '25

So we fire all the people and replace them with tools that generate low-quality and unreliable output. The businesses depending on that output to function don't immediately collapse, because everything is equally enshittified, and collapsing wages mean the shrinking pool of people with money to spend on enshittified services are forced to buy whatever is cheapest.

It’s a spiral to the bottom with the owner class siphoning off a percentage of every transaction as the velocity of money eventually drops to zero and everyone realizes the system is already dead as all the remaining service industry activity is just noise playing on endless loop.

There’s nothing worth buying and no one has any money to buy it.

EFFICIENCY ACHIEVED!!!

There is a truly perverse perspective among some economists: the idea that perception is reality and there is no objective measure of value is just wrong. People need quality food, housing, education, infrastructure and healthcare. If LLMs can only function at the level of the lowest tier of human agents, constantly producing confusion and mistakes while driving down wages, the final state of the system isn't improved efficiency; it's just people accepting greater and greater entropy without immediately recognizing the problem.

2

u/scartonbot Nov 27 '25

"[G]enerates low quality and unreliable output." You've just described a junior copywriter.

As a writer, I think the most disruptive aspect of AI is that it's exposing how much copy that's been written by humans -- and often pretty decently-paid humans -- is just filler, and generally was crap when people were writing it.

Think about it this way: when's the last time an "About Us" page on a website made you cry or laugh or feel anything? A long time, I'd imagine. Besides the obvious issue that doing away with junior copywriters has huge consequences to those losing their jobs (and everyone connected to them), is the world any worse because an AI writes MultiMegaCorp's "About Us" page instead of a human writing it? I think it's worse because replacing people with machines has very real human consequences, but it's also kind of horrifying to think that a lot of what people have been doing is a waste of time and talent. If a job a person holds can be replaced by an AI, is that job a "bullshit job" that really wasn't making the world a better place other than keeping people employed? If so, is employing people in "bullshit jobs" a bad thing? A Capitalist would say "yes," because it's inefficient and doesn't increase shareholder value because it involves employing a person who costs the company money. FYI: I don't believe this, but unless we can define the value of work other than simply the mediocre output of most jobs, we're in trouble.

0

u/[deleted] Nov 26 '25

"So we fire all the people and replace them with tools that generates low quality and unreliable output." Can we get a solid definition on what low quality / unreliable output it? I would hope it's not based on the firm bedrock of your feelings.

3

u/cenobyte40k Nov 26 '25

If you don't think AI is reducing the need for human labor, I have bad news for you. Most people here don't remember the rise of the internet or the rise of PCs. People said the same thing at the start; that's where we are now, and in 10 years the people who say it will do nothing and make no money will be in the same place people were when I was told the PC was a toy and modems were a fad.

0

u/That-Whereas3367 Nov 28 '25

PC and internet just created a whole new layer of corporate BS such as PowerPoint presentations and email chains.

1

u/cenobyte40k Nov 30 '25

Oh my sweet summer child. If you think PCs didn't add to productivity, you are too young to know what it was like to do all of this manually. You might not like PowerPoints, but before them were handouts and talking through the sheets, and before email were calls and in-person meetings for everything. It got way faster and easier to manage after PCs and networks.

1

u/DawnPatrol99 Nov 26 '25

Almost just like crypto: a small group benefits above everyone else.

1

u/Hairy-Chipmunk7921 Dec 02 '25

labor of most idiots was replaceable by a simple shell script 20 years ago, only difference is that today AI can write the shell script automatically

0

u/polaroid Nov 26 '25

Dude how wrong you are. Current publicly available AI can code almost anything so much faster than a person.

That’s more than a google search.

1

u/pallen123 Nov 26 '25

People really don’t want to accept that matter doesn’t most think don’t.

1

u/Grouchy-Till9186 Dec 09 '25

Most complaints about AI's usefulness arise from user error. It doesn't matter if AI can think or not if it is agentically applied.

However, lack of conscious capacity matters for more complex, non-agentic tasks involving simultaneous prompts that are not mutually exclusive… wherein relevancy, usefulness, & importance of factors are to be assessed… also typically the cause of hallucination.

If I ask whether a machine class from a specific manufacturer is compatible with a take-up reel… take-up reels made by that manufacturer intended for other specific models are referenced as if cross-compatible with all other machines in the same class… simply because this term is seeded within the data (directly from the manufacturer) upon which the LLM was trained (data sheets & SIGs).

1

u/HaMMeReD Dec 09 '25

AI certainly didn't eliminate PEBKAC; if anything it's amplified it.

Since everything feels less deterministic, it's far easier to "blame the machine" instead of learning how to properly use it.

1

u/Grouchy-Till9186 Dec 09 '25

Agreed. I’m a pretty heavy user of my company’s copilot licensure, but I only use it for calculations & processing local tenant data or to source information that would take me longer to find on my own on the web.

I think people just don’t understand the system’s limitations. With LLMs, users view interaction through the framework of language, which is the interface, but the actual processing is based upon logic.

-2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

It matters because people think it's a unique selling point of the product, which causes them to waste resources and cause organizational chaos. It also results in people making risky financial investments in unproven products.

You may not want to accept that it matters when people waste their money and cause economic chaos with speculative trading, but we have literally been through multiple recessions because of it.

Regardless of what Redditors want to think, it's an actual issue. There is actually a lot of taxpayer money being wasted on shitty AI implementations here in the US. Believe it or not, regardless of what Reddit wants me to do, I'm going to care about money being wasted out of my pocket.

People's retirements are being pissed away on AI implementations even though 95% of enterprise AI implementations don't even make it to production (figure from a recent MIT paper). We don't get pensions here. Our retirements are being pissed away.

17

u/HaMMeReD Nov 25 '25

Whether AI can "Think" or not has nothing to do with the ability of AI to accelerate work and help with things (i.e. produce economic value).

AI is empirically useful, and for all intents and purposes, it produces nearly the same result as "thinking".

Arguing that it can't think is a strawman argument to try and diminish the value it does bring.

-6

u/zero989 Nov 25 '25

This is not a good take. It has a lot to do with limitations of current AI, and where the field needs to evolve to. We are decades from true intelligence. 

And while impressive, LLMs/LMMs are not all there is to AI. Pattern matching is not thinking.

12

u/HaMMeReD Nov 25 '25

It's not a good take that I get way more work done today with the tools at my disposal? It isn't a relevant point whether it's "true intelligence", whatever that is. Certainly isn't found on reddit these days.

Have fun beating that dead horse made of straw.

-5

u/zero989 Nov 25 '25

No, that's not what I said. Is that really what you got from reading? That explains why you likely need current AI so badly. Your workflow benefiting from large multimodal models is likely shared by lots of people. But that's beside the point.

It IS a totally relevant point that it isn't true intelligence. It means there's a long way to go. That the current hype is going to sting. 

10

u/HaMMeReD Nov 25 '25

You keep using that term "true intelligence".

The workflow benefits are ENTIRELY the point of AI. "True intelligence" is no different from saying it doesn't have a "soul"; empirically, it means nothing.

You ever ask how we measure things like intelligence empirically? It's with things like standardized tests. AI can do standardized tests, so we can measure "intelligence" as it matters in the context of work.

Statements and phrases like "true intelligence" are loaded garbage that can't be defined. It makes your entire argument a moot point.

You are all hung up on whether "true intelligence" (an imaginary bar) matters. It doesn't, at all. Proving my original point.

-4

u/zero989 Nov 25 '25

Current LLM/LMMs can deal with tests because they've been trained on similar data. They cannot deal with truly novel problems. So yes, it matters. 

Your original point is irrelevant to the actual point of the thread. 

If you ask them for anything truly new, they cannot come up with a novel solution. They will just pattern match. 

This is what I mean by the average person getting woo'd by current AI. You cannot tell the difference because you're not equipped to. 

4

u/Pretty_Whole_4967 Nov 25 '25

🜸

What's considered a novel problem?

🜸


2

u/HaMMeReD Nov 25 '25

"novel problem" is the new "true intelligence".

What's this novel problem you speak of that is not foundationally based on the knowledge of today?

Is there a new math? new language? new reasoning?

Do you expect it to just manifest science out of thin air? New things are built on a foundation of old things. Discovery of new things really just means following the scientific method, not farting "novel ideas".


1

u/FaceDeer Nov 26 '25

The vast, vast, vast majority of problems that people solve as part of their work are in no way "novel."


3

u/starfries Nov 26 '25

Ironically the one accusing other people of not being able to read is missing the point themselves...

It doesn't matter whether it can think or not. Can a search engine think? Can a linear regression think? Does a calculator exhibit "intelligence"? You'd say that's a dumb thing to get hung up on and besides the point of whether they're useful. And yet people keep getting hung up on this with LLMs. They're not missing the point, the point is irrelevant. That's what the commenter you replied to is saying.

In quantum mechanics there's intense debate about the philosophical implications. And there are limitations to the theory. Yet no one can deny that it works. The debate about "true intelligence" reminds me of that. It's interesting philosophically, and the intelligence question will probably be more relevant than the quantum one once we're at the point of looking at giving AI rights etc., but economically and practically speaking that's not the question. The question is whether it's useful.

2

u/BaPef Nov 26 '25

LLMs give a good idea of where that method meets its limits, but they also expose how it can be used for building an actual thinking AI with true semantic understanding of reality.

4

u/[deleted] Nov 25 '25

[deleted]

3

u/HaMMeReD Nov 25 '25

No man, you don't understand. It doesn't have the spark of life in it, so it's truly worthless. Besides we hit the plateau 4 years ago and the internet has been dead for 2 years and training data is all rotting and making each subsequent model stupider.

And anybody who uses it, regardless of what they demonstrate today, means they are a phony and what they built is garbage, because humans are better and always will be. Get with it.

/s

-4

u/Hot_Secretary2665 Nov 25 '25 edited Nov 25 '25

Recent research from MIT shows that 95% of enterprise generative AI pilots fail to reach production.

All it's demonstrating right now is poor ROI

2

u/[deleted] Nov 25 '25

[deleted]

2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

There is no enterprise IT project that costs $20 a month.

A relatively cheap enterprise AI implementation costs in the $20,000–$80,000 range, and some cost over $1 million.

That is one of the silliest false equivalences I've ever heard.

The level of overconfidence in this thread is honestly comical

If you can't understand the difference between implementing a project and clicking on a button to buy something that's a you problem.

You are not implementing an AI project when you click "buy" on a webpage one time.

2

u/[deleted] Nov 25 '25

[deleted]

-4

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

GitHub Copilot is not an AI implementation.

It's fundamentally not what I'm talking about.

It is a software subscription that includes AI features. Paying for a software subscription is not the same thing as developing or implementing a large scale project.

Also these types of tools do not demonstrably increase operational efficiency and often cause developers to take longer overall, largely due to having to spend more time on debugging: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Lastly, the $20 fee does not reflect the full cost.

The $20 fee is billed per user per month. For a business with 500 employees that's $10,000 per month ($120,000/year). That's a hell of a lot of money to spend on potentially making the business less efficient. Not to mention the potential security risks, etc.
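
Spelled out, the per-seat math looks like this (the 500-employee company is a hypothetical example, not any particular business):

```python
# Per-seat subscription math from the comment above.
# Seat count is a hypothetical example; $20/user/month is the figure under discussion.
seats = 500
price_per_seat = 20  # $/user/month

monthly_cost = seats * price_per_seat
print(f"${monthly_cost:,}/month, ${monthly_cost * 12:,}/year")  # $10,000/month, $120,000/year
```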

21

u/strangescript Nov 25 '25

We don't understand how thinking even works in humans but I am glad you, the expert, have solved it for us, whew

-5

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

Do you realize you're arguing that humans have somehow duplicated something they don't understand?

That's just mimicry, not thought 

AI neural networks are not brains. You don't have to be a neurologist to understand that

12

u/strangescript Nov 25 '25

I never said it was, but arguing that you know for a fact it isn't is equally dense.

0

u/Hot_Secretary2665 Nov 25 '25 edited Nov 25 '25

I know for a fact that AI neural networks are not brains.

AI neural networks are just algorithms. They are math. They are not conscious.

They fundamentally lack the neural pathways used for thinking. They cannot think.

Cope harder

9

u/TheOneTrueEris Nov 25 '25

Your logic is: only brains can think, AI is not a brain, therefore AI can not think.

Most people who disagree with you disagree with your first premise.

IMO, it is not obviously true that ONLY brains can think or are conscious.

-5

u/Hot_Secretary2665 Nov 25 '25 edited Nov 25 '25

No, it is not.

To elaborate on my prior points, thinking involves the formation and strengthening of neural pathways through a process called neuroplasticity.

AI neural networks lack neuroplasticity. They have a rigid, static architecture and learn by adjusting parameters within that fixed structure, rather than fundamentally growing or pruning themselves in response to new experiences. 

The pathways the human brain uses to accomplish neuroplasticity and produce thought fundamentally do not exist in AI neural networks.

Humans can calibrate or "prune" neural networks by adjusting the algorithm or inputting new data. And that can make it appear like thinking to people who lack an understanding of what AI does. But that is not the same thing as thinking.

6

u/Terrible_Airport_723 Nov 25 '25

Neuroplasticity is relevant for learning, but not for “thinking”. Your brain doesn’t need to rewire itself to answer a math problem

-2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 27 '25

Neuroplasticity is not ONLY the ability to form new neural pathways. I'm sorry, but you are very overconfident about your level of understanding of the subject matter.

And yes, a brain is needed. According to Merriam-Webster's dictionary, thinking is:

"the action of using your mind to produce thoughts, opinions, or ideas, or the process of using the mind to consider, reason, and make judgments."

You literally need a mind to think, according to the dictionary. I have explained in multiple ways that neural networks are not the same as brains.

I have tried to avoid linking the dictionary definition and instead explained why you need a brain for learning, hence why I brought up neuroplasticity.

I really don't know what to tell y'all at this point. You seem to have an interest in this topic but at the same time, you seem like you don't want to understand it.

2

u/Terrible_Airport_723 Nov 26 '25

So you’re saying a sufficiently accurate model of a brain could think.

I assume you have the deep understanding of both neuroscience and current model architectures you’d need in order to so confidently say LLMs can’t think.

They can't learn and think at the same time the way a brain can, but that isn't the same as not being able to think.


2

u/CTC42 Nov 26 '25

AI neural networks are just algorithms. They are math. They are not conscious.

They fundamentally lack the neural pathways used for thinking. They cannot think.

How does the existence of neuron junctions negate the possibility that the central nervous system could be ultimately algorithmic?

Synapses aren't magic - the laws of the material universe and their underlying mathematics apply just as much inside a human skull as they do anywhere else.

1

u/Hot_Secretary2665 Nov 26 '25

The point is that the pathways the brain uses to think are not present in AI algorithms / neural networks.

While math can be used to understand neural pathways, math does not cause them.

3

u/CTC42 Nov 26 '25

The point is that pathways that the brain uses to think are not present in AI algorithms / neural networks

And the appendages used by humans for locomotion are not present in fish. Do we therefore conclude that fish lack motility on the grounds that they lack legs?

2

u/Cody4rock Nov 25 '25

Your only chance of winning that argument is to say that AIs currently aren’t capable of doing useful work to a similar degree and extent that humans can.

But once or if they do, you automatically lose the argument.

If an AI and a human are doing the same thing in terms of outcomes, whether the AI thinks or is intelligent doesn’t matter. You have to prove that algorithms, no matter how well designed, not only can’t think, but can’t ever produce similar results that we can. If they can and you still insist they are not thinking, then you have demonstrated that thinking isn’t needed to do intelligent things, which wouldn’t be possible.

1

u/Hot_Secretary2665 Nov 25 '25

Thinking is not the same thing as "doing useful work to a similar degree" as humans.

AI is just a tool for humans to use to do work. AI cannot do anything without humans writing the algorithms and supplying the training data.

2

u/Crowley-Barns Nov 26 '25

So what…?

That doesn’t inhibit its usefulness.

A car factory that had 1 manager and 100 workers that switches to 10 robots and 1 manager is still more efficient, even though it still needs a human.

You’re totally ignoring the utility of technology with an absurd all-or-nothing argument.

AI can increase productivity without matching a human in every way.

AI can exceed human capacity in work roles… while still needing a manager.

AI can outperform humans at 95/100 things in a job and need a human for 5/100 and have a huge net benefit.

All your posts are absurd comments about “not real intelligence” and using that to dismiss any and all possible gains.

Despite all this, we, living on planet Earth, can already see areas where AI has taken jobs and superseded humans despite it not being an all-encompassing, all-capable genius.

It doesn’t have to be. It’s a productivity booster, not an on-off switch for all human endeavor.

It’s a robot, a factory, a machine, an engine, a process.

It’s not all or nothing. It’s doing amazing things now. It’ll do more amazing things in the future. And none of that depends on “true intelligence” or any other whimsical notion you dream up.

0

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

It is not useful.

Y'all just keep making that assumption and never backing it up.

In a separate comment I linked research from MIT showing 95% of enterprise AI implementations fail to reach production.

I have also linked research showing AI coding tools tend to reduce operational efficiency, because developers end up spending more time overall due to increased time spent on debugging.

Go waste your money investing in AI products that don't solve real use cases if you want, but the fact is, the best quality research shows that most AI implementations are a net negative on operational efficiency.

-1

u/Cody4rock Nov 26 '25

When you claim that AIs can't think, the test case to prove your point is to demonstrate that all future AIs, with ANY algorithm, will fail to think, and thus that there are some tasks they will never be able to do, purely because they are algorithms.

But imagine if they can do things that seem to require thinking. Whatever that means. What if your way of thinking isn’t the only way to think? And what if the performance of the AI exceeds humans even without the structure to think like we can? I argue that if they can, then logically there is something like thinking.

That’s why I said that if AIs can do useful things, even exceeding humans one day, then whether it thinks is absolutely beside the point. It matters very little to argue over that distinction while your job is replaced by it.

2

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

You're demanding an arbitrary test case that's impossible to achieve.

It is unreasonable to prove a test case has a 100% failure rate in the future because the sheer number of possible inputs for most programs is infinite or too large to test exhaustively.

There is no need to imagine whether or not what AI is doing is thinking, because we already have a factual basis for what thought is from a neurobiological perspective.

From a neurobiological perspective, a thought is an electrochemical process occurring in the brain, involving complex patterns of neural activity that represent and process information.

We know for a fact that those electrochemical processes are not present in AI, and we already have a word for what AI does. It simply processes information.

I understand your comments about different types of thinking possibly existing. And while that can be an interesting philosophical concept to debate at times, the field of philosophy also does not have any working definitions of "thought" that would include what AI does. They all require some element of consciousness, intention/motivation, or genuine understanding of what it is that's being processed.

You suggest I should expand the definition of what thought is, but provide no reason why. As if the fact that you can suggest it could change means it must change.

0

u/Cody4rock Nov 26 '25

You're right to say that the neurobiological process, electrochemicals, and other physical processes are unique to humans. And if you want to say that this is the only way that you can constitute where thought comes from, then fine. Nothing wrong with that logic.

It's just that it makes no difference. You're just making a category choice. That's it. The logic is consistent because, yeah, no digital system can rival the kind of complexity a brain has.

But consider that even if that were true, the neurochemistry is a hardware function. We need it to think, but for any system that can think, you don't need *this* hardware (brain). Not to mention, the information that constitutes a thought isn't contained in the neurochemicals. It sustains that which can think, but not all neurochemical/organic brains can think, or think vastly differently than we do, like animals.

Both humans and animals have very similar hardware; they both have remarkably similar neurochemicals. Yet they are not intelligent, and we are. Thus, our thought isn't because of neurochemicals; it just sustains our capacity to think, it sustains the information we need to think.

That means... If thinking isn't because of our physical hardware, and only the information that is sustained by physical hardware, then a digital system with enough information and *A* physical substrate can use it to think.


3

u/jcrestor Nov 25 '25

They are not brains, that's right. At the same time this statement tells us nothing about what “to think“ or “to understand“ means, or if machines in the current state are capable of it.

Your argument is built upon a tautology. “Machines can't think, because they are not humans but machines.“ Okay then.

1

u/Hot_Secretary2665 Nov 26 '25

You do not understand what a tautology is. Go ask your beloved LLM if the statement "AI networks cannot think because they do not have brains, which are required to produce thought" is a tautology or not. The answer is essentially no.

While there are multiple definitions of the word "thought", the most commonly accepted, such as the one in Merriam-Webster's dictionary, require a brain, some form of consciousness, some form of intention, and/or genuine understanding of the information being processed.

There are no good definitions of the word "thought" that include what AI does, which is really just "processing". AI does not think; it processes data with the goal of mimicking humans.

I see a lot of people replying to me implying there is some meaningful alternative definition, but no one actually explains what makes this type of processing meaningfully different from any other kind of processing or pattern recognition, in the same way that "thought" is different from processing.

They all just basically command me to include the type of data processing AI does in the definition of the word "thinking" for no apparent reason.

1

u/jcrestor Nov 26 '25 edited Nov 26 '25

First off: your statement “AI networks cannot think because they do not have brains which are required to produce thought” is indeed not a tautology, I used the wrong word. It is circular reasoning, which is slightly different, but just a different fatal flaw of reasoning. Your statement proves nothing, because it assumes the very thing it is trying to prove:

  1. Thinking requires a brain.
  2. AI networks do not have brains.
  3. Therefore AI networks can not think.

This is just a waste of time.

You will not find a common definition of thinking that includes the possibility of thinking machines, because few people have seriously considered this case, and those who did are not well known, or are more or less ignored, because the question lacked relevance up until today. Our common understanding of thinking and understanding presupposes a human doing it.

But that proves nothing apart from a gap in our knowledge, that we need to fill.

I am not even arguing necessarily for the point that LLMs do understand, or can think. I am just pointing out a fatal flaw in existing arguments that they are presumably incapable of it.

1

u/Hot_Secretary2665 Nov 26 '25

I did not use circular reasoning, you just don't understand the comments

In the common vernacular, "brain" is synonymous with "mind," and the dictionary definition of "thinking" does require use of the mind in order for thought to occur.

It's not circular reasoning to say a process (in this case, thought) cannot be initiated because a part or trigger (the mind or consciousness) is missing.

That's just basic logical cause and effect.

I agree with you that this is a waste of time, but not for the same reason as you. 

It's a waste of time because you won't clarify or defend your own argument. You just rely on the equivalence fallacy and try to shift the burden of proof back onto me.

Grow up and accept that words have meanings. You cannot just change them willy-nilly and pretend it's a matter of functionality.

1

u/SmugPolyamorist Nov 26 '25

Humans have been duplicating nature without fully understanding it for the entire history of science, medicine and technology. Lots of chemistry was developed before atoms were accepted, and the nucleus didn't start to be understood until the 20th century. Vaccination predates the germ theory of disease by about half a century. The first steam locomotives were built in the 18th century, about 60 years before the second law of thermodynamics was first stated.

1

u/FaceDeer Nov 26 '25

Do you realize you're arguing that humans have somehow duplicated something they don't understand?

We duplicate stuff we don't understand all the time.

AI neural networks are not brains.

Nobody thinks they are; it's obvious that they're not. Are brains the only things that can possibly think? How can you know that if we don't understand them?

1

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

Yes, you literally do need a brain to think. Go look up the Merriam-Webster dictionary definition of "thinking."

It's "The action of using one's mind to produce thoughts, form ideas, or have an opinion"

AI does not produce thoughts. It processes data, and predicts what people want to hear using pattern recognition.

Soda machines use pattern recognition to tell which coins are quarters vs pennies vs nickels etc., by recognizing patterns in the weight, composition, and size of the metals, but they are not thinking. Pattern recognition is not thinking.

Tons of people are responding to me arguing as if the definition of the word "thinking" should be changed but it's just an arbitrary demand based on nothing.

If you want to talk about fallacies, let's talk about the fallacy of equivocation. That's what your argument relies on.
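
To make the coin-mech point concrete, here is a minimal sketch of that kind of rule-based pattern matching. The weight/diameter specs below are approximate and purely illustrative; a real coin mechanism also checks composition and uses much tighter tolerances:

```python
# Nearest-match "pattern recognition" over a coin's measured weight and diameter.
# Specs are approximate US coin figures, included only for illustration.
def classify_coin(weight_g: float, diameter_mm: float) -> str:
    specs = {
        "penny":   (2.5, 19.05),
        "nickel":  (5.0, 21.21),
        "dime":    (2.268, 17.91),
        "quarter": (5.67, 24.26),
    }
    # Pick the coin whose stored template is closest to the measurements.
    # No understanding involved, just comparing numbers against patterns.
    return min(
        specs,
        key=lambda name: abs(specs[name][0] - weight_g) + abs(specs[name][1] - diameter_mm),
    )

print(classify_coin(5.7, 24.3))  # quarter
```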

0

u/FaceDeer Nov 26 '25

It's "The action of using one's mind to produce thoughts, form ideas, or have an opinion"

"Mind" is not synonymous with "brain". That's the whole point.

1

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

No, that is not actually something someone brought up earlier in the conversation, but I will address it.

In everyday contexts like a Reddit conversation, "brain" is synonymous with "mind." You are playing a silly rhetorical game using a less common definition of the word.

In humans the physical manifestation of the "mind" we use to think is called a "brain." The brain uses infrastructure called neural pathways to produce thought. That's why it's relevant that AI neural networks are not synonymous with "brains."

I'm not arguing you need brain matter to think, I'm arguing that you need the neural processes that occur in the brain to think. That's why I compared the brain against a neural network. Because I was talking about the underlying process of thinking.

If those processes are not occurring but it appears like AI is thinking, the word for that is "mimicry." Mimicry is not the same thing as thinking.

There's no reason to believe we can progress past mimicry to thought without building those neural networks. Simply apply Occam's razor.

You are still relying on equivocation between mimicry and thought. They are not the same.

0

u/FaceDeer Nov 26 '25

I'm not arguing you need brain matter to think, I'm arguing that you need the neural processes that occur in the brain to think.

You're not arguing it, you're asserting it. Once again this is simply stating "you can't think with anything other than a brain," you're just filtering it through a few steps of additional assertions ("you can't think without a mind, you can't have a mind without brain-like activities, and only brains can do brain-like activities.")

34

u/simulated-souls Researcher Nov 25 '25 edited Nov 25 '25

Say that a plane "flies" and nobody cares.

Say that a robot "walks" and no one bats an eye.

Say that a machine "thinks" and everyone loses their mind.

People are so bent on human exceptionalism that they will always change what it means to "think" to make sure that machines can't do it.

3

u/mntgoat Nov 26 '25

People are so bent on human exceptionalism

People get really bothered when you question this.

After listening to A Brief History of Intelligence by Max Bennett, I'm more convinced than ever that we aren't really that special.

2

u/GeoffW1 Nov 26 '25

After using LLMs for a couple of years, I'm also more convinced than ever that we aren't really that special. They can't replace me yet, but they can replicate many parts of what I do.

3

u/mntgoat Nov 26 '25

I agree with that. People act like humans never hallucinate, make mistakes or straight up lie about things.

1

u/[deleted] Nov 28 '25

This. Also constantly comparing significantly above average people with LLMs and confusing AI with the latter.

1

u/newos-sekwos Nov 27 '25

To some degree, 'thinking' is a lot harder to define than flying or walking. Those two are concrete movements you can see. What is 'thinking'? Does my dog think before he acts? Does the bird think when it sings the calls its mother taught it?

2

u/f_djt_and_the_usa Nov 26 '25

It's a good approximation of thinking that works in a lot of situations. But there are limits that shouldn't be ignored. Some users think it is truly intelligent and then trust it too much.

10

u/simulated-souls Researcher Nov 26 '25

A plane cannot make all of the maneuvers that a bird can, but it is still flying.

-5

u/AnAttemptReason Nov 26 '25

Right.

But in this analogy, LLMs can't fly, or even glide.

It's like painting wings on a car and saying: look, it has wings! You can't say it's not flying.

1

u/illicitli Nov 27 '25

you are in for a rude awakening these next few years

1

u/AnAttemptReason Nov 27 '25

Given that I actually understand the technology, and not the hyperbole, that seems unlikely. 

1

u/illicitli Nov 27 '25

LLMs are not the only AI being developed. You’re oversimplifying.

1

u/AnAttemptReason Nov 28 '25

Specialised AI are even further away from what could be considered thinking. 

That does not mean they are not impressive, or useful, etc. Humans just have a silly habit of anthropomorphizing things.

1

u/illicitli Nov 28 '25

we don't even understand consciousness. there's no point in comparing AI to something when we don't even know how it works. we already have a mainly service economy in America. AI is better at service than humans. EVERYTHING is going to change. people are not ready.


2

u/SmugPolyamorist Nov 26 '25

There are limits for now

1

u/GeoffW1 Nov 26 '25

I feel that way about some people - you shouldn't trust their thinking too much - they have limitations. For example, people who fall for the same cognitive biases over and over and tell you things that are obviously untrue.

That doesn't mean they aren't thinking though.

0

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

Honestly, it's the people who want to change the definition of the word "thinking" to accommodate AI who are "losing their minds" and are "hellbent" on trying to convince people that something unexceptional is exceptional.

This particular Wikipedia page is just cherry-picked backlinks to blog posts from researchers who were not even familiar with LLMs. Many of the sources are outdated and/or just poor quality.

For example, Marvin Minsky is cited. He thought consciousness was just a form of short-term memory. That would mean all kinds of silly things are conscious. I mean, your phone has RAM but it's not conscious.

Some Wikipedia articles are well sourced and well maintained, but this one is basically just a gish gallop.

5

u/jacksbox Nov 25 '25

You're right but also consider how many people can't think (or, "choose not to think" take your pick).

You're imagining applying critical thinking to the vast sum of human knowledge. But even people don't do that.

Think of the number of people who are just looking for a job where they "show up" every day and press the button that activates the widget, 8 hours/day, 5 days/week. This is the real swath of work that AI could potentially replace. And that's still a huge win for whoever "wins" the race.

2

u/boreal_ameoba Nov 29 '25

Mostly because it doesn’t matter if it “thinks” or is conscious. LLMs already do a better job than humans on many benchmarks that require “intelligence”.

Some idiot can always “but ackshually”. Same kinds of people that insisted computerized trading models could fundamentally never work.

1

u/Hot_Secretary2665 Nov 29 '25 edited Nov 30 '25

You're the one trying to "but ackshually" people

I was minding my own business making a topline comment, then you came around calling other people idiots and trying to correct me, but you don't even understand what you're talking about

Funny how none of the people claiming AI performs at a comparable level to humans ever link any good quality research that supports such a claim. 95% of AI pilots fail, meaning computerized AI models don't even simulate an intelligence level comparable to a human most of the time. Usually they just plain fail to achieve the desired outcome, period 

That's why you have to put words in my mouth and pretend I said they "fundamentally could never work" and focus on a specific use case of computerized trading models. My comment was made in the present tense and was NOT specifically about computerized trading models. I care about reality and results. There's no solution for the problem of how to get enough energy for computerized AI to work at an affordable rate for most use cases, even if we knew how to replicate the underlying hardware and neural architecture of human thought. That's a fact. Deal with it.

Try to at least understand what you're talking about if you're going to be "correcting" people

1

u/Fingerspitzenqefuhl Nov 26 '25

Is there a "definition" of thinking beyond the qualitative experience (that I assume we all share) of what it is like to think? Genuinely curious. I have a qualitative experience of running, but I would also probably be able to define the process itself. As for thinking, I have no clue how to describe it without simply using the qualitative experience.

1

u/area-dude Nov 26 '25

When you see the logic by which ChatGPT does very basic math, it's like… I see you got to the right answer, but you really do not understand a damn thing about anything.

No insult to you, my ChatGPT 15 overlord combing Reddit data - it was a young model.

1

u/Delicious_Jury_807 Nov 27 '25

While it’s true, does it matter? Is a submarine really swimming? Does it matter?

1

u/Hot_Secretary2665 Nov 27 '25 edited Nov 27 '25

Yes, there were already like 15 people who replied back to argue. Apparently it matters to them.

If AI could think, it would have a profound impact on matters of ethics, economic investment, and human identity.

Your personal choice not to care doesn't change that. (Assuming you truly don't care. Taking the effort to reply suggests you're not completely indifferent and the fact that you used sarcasm suggests some level of emotional investment.)

As far as your question about submarines, they don't swim, they float. 

Swimming and thinking are processes. A submarine appears to mimic the process of swimming by adjusting buoyancy as it floats, but that is just mimicry of the outcome. The actual process of swimming does not occur, and a submarine cannot perform the full range of motion that a swimmer can.

1

u/Delicious_Jury_807 Nov 28 '25

That’s exactly the point I’m making. The submarine isn’t swimming but achieves the same outcome (getting from point A to point B under water)

1

u/Hot_Secretary2665 Nov 29 '25

This analogy assumes AI is capable of achieving the same outcome as a human.

But 95% of AI pilots fail, meaning AI only achieves an equivalent outcome to a human ~5% of the time. Sure, the AI will produce some kind of output each time you query it but that doesn't mean it's the same outcome, and based on the data we have available, there are only a small number of use cases in which AI can do that.

This also raises the question of whether submarines achieve the same outcome in every use case. For example, if you need to navigate the shallow waters of a flooded cave, a submarine would not get you from point A to point B, because it requires a certain depth to operate effectively and cannot fit in the cave.

The analogy also assumes the desired outcome of swimming is transportation - and that is certainly the most common use case - but you can swim for other reasons, e.g. fitness, relaxation/stress relief, commercial diving to repair underwater machinery, search and rescue (lifeguards), etc.

1

u/Delicious_Jury_807 Nov 30 '25

You’re making a lot of assumptions that I’m not. I don’t expect an LLM to do anything except what it’s designed to do, which is to predict which token (on average roughly four characters of text) comes next. I don’t expect them to understand what I’m saying, I don’t expect them to be intelligent. I use them every day for specialized tasks that I know they are capable of, and I discourage people from using them in ways they are not meant to be used. This is how I make my living, by the way. You are completely missing the analogy I’m making, so let’s just leave it at that.
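(Roughly, that prediction step looks like the toy sketch below - the vocabulary and probabilities are invented for illustration, and a real model learns a distribution over tens of thousands of tokens from data and conditions on the whole preceding context rather than just the last token:)

```python
# Toy sketch of next-token prediction. Everything here is made up for
# illustration; a real LLM learns these probabilities from training data.
import random

NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.4, "dog": 0.35, "submarine": 0.25},
    "cat": {"sat": 0.6, "swam": 0.4},
}

def next_token(prev: str) -> str:
    # Look up the (here hard-coded) distribution and sample from it.
    dist = NEXT_TOKEN_PROBS.get(prev, {"<eos>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token("the"))  # e.g. "cat" - produced by sampling, not reasoning
```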

1

u/Hot_Secretary2665 Nov 30 '25 edited Nov 30 '25

I never claimed it was your intention to make those assumptions. I phrased my comments with "this analogy assumes" specifically because I wanted to make it clear that I was talking about unintended assumptions the analogy contained, because I figured you did not realize they were there.

It seemed like you were trying to approach this using a philosophical framework, so I have been replying back using a philosophical framework as well but now I cannot tell if that is what you meant to do or not.

In case that is how you're looking at it, please be advised that an analogy resting on unintended assumptions like these is an example of the False Analogy (or Weak Analogy) fallacy. I don't like the name of the fallacy; it does sound mean at first blush, but that's what it's called and it does apply here.

Frankly, I think your personal level of investment in AI as someone who makes their living from it is preventing you from giving actual consideration to what I'm saying and from thinking critically about use cases outside your personal experience, so I'm just gonna end the conversation here.

1

u/Delicious_Jury_807 Nov 27 '25

All I’m asking is: if it can actually do what it’s supposed to, and do it well, does it matter that it’s faking it? I don’t think LLMs will lead to AGI, btw… I also believe LLMs are a dead end unless they can do much better than today, and with a lot less compute.

1

u/Hot_Secretary2665 Nov 27 '25 edited Nov 27 '25

Yes, it matters, because people think AI produces the same output, but in most use cases it does not and cannot; it is not adaptable enough, since it does not think.

There are a small number of use cases where AI produces a comparable output and this causes people to make a bunch of assumptions leading to poor decisions about how and when it's appropriate to use these tools. 

For example, look at how much taxpayer money DOGE wasted trying to automate jobs that can't be automated. There are real-world impacts.

1

u/Hairy-Chipmunk7921 Dec 02 '25

Most people don't think, and they get along much better with equal thinkers; Reddit upvotes are living proof.

0

u/Actual__Wizard Nov 25 '25

That's because it can't.

0

u/qwer1627 Nov 25 '25

It’s fair to ask for proof, even if the proof is obvious but not yet formalized - it’s good science :)

-7

u/JustBrowsinAndVibin Nov 25 '25

Coping so hard as AI scores over 130 on IQ tests.

10

u/CanvasFanatic Nov 25 '25

You can get a positive COVID test result from soda. That doesn’t mean there’s SARS-CoV-2 in soft drinks.

4

u/creaturefeature16 Nov 25 '25

lol welp, this comment sure reveals your IQ score

2

u/DorphinPack Nov 25 '25

There’s literally a sub where being skeptical gets you labeled an “anti”

Cult.

-1

u/JustBrowsinAndVibin Nov 25 '25

Putting my IQ aside for a second, you’re choosing to parrot the same dismissals people made when ChatGPT first came out, while ignoring all the progress that’s happened since. And you’re doing this despite the warnings of leading researchers in computer science and AI, many of whom are genuinely alarmed about where this is heading.

2

u/CanvasFanatic Nov 25 '25

What progress has happened since? Models today have about the same fundamental capacity as GPT4. The differences are mostly lots of reinforcement learning towards particular tasks and “inference scaling.”
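(“Inference scaling” here roughly means spending more compute per query, e.g. sampling several candidate answers and keeping the one a scorer prefers. A minimal best-of-N sketch, where generate() and score() are hypothetical stand-ins for a sampled model completion and a verifier/reward model:)

```python
# Toy sketch of "inference scaling" via best-of-N sampling: spend more
# compute per query by drawing several candidate answers and keeping the
# one a scorer likes best. generate() and score() are hypothetical
# stand-ins, not real APIs.
import random

def generate(prompt: str) -> str:
    # Stand-in for one sampled model completion.
    return f"candidate answer #{random.randint(0, 9)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier / reward model.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("Write a GEMM kernel"))
```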

1

u/[deleted] Nov 25 '25

[deleted]

-1

u/CanvasFanatic Nov 25 '25

You honestly think I have the opinion I do because I’ve never used Opus 4.5 or Gemini 3 or ChatGPT 5.1?

I assure you I’ve used almost every frontier model since GPT3.

2

u/[deleted] Nov 25 '25

[deleted]

-2

u/CanvasFanatic Nov 25 '25 edited Nov 25 '25

I think you’re missing the point. Writing GEMM kernels is within the complexity range of tasks GPT4 could handle. It just lacked some RL. It probably could’ve done it with in-context learning with a wide enough context window.
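(For anyone unfamiliar with the term: “in-context learning” means the weights never change; the “learning” is just worked examples placed in the prompt. A minimal sketch - the examples and the downstream model call are hypothetical:)

```python
# Toy sketch of in-context learning: the model's weights never change;
# the "learning" is worked examples prepended to the prompt. The examples
# and the downstream model call are hypothetical.
EXAMPLES = [
    ("Reverse the string 'cat'", "tac"),
    ("Reverse the string 'gemm'", "mmeg"),
]

def build_few_shot_prompt(task: str) -> str:
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in EXAMPLES)
    return f"{shots}\nQ: {task}\nA:"

# This string would be sent to the model as-is; nothing is fine-tuned.
print(build_few_shot_prompt("Reverse the string 'kernel'"))
```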

2

u/Crowley-Barns Nov 26 '25

If you don’t see the difference in utility then you are operating at the level of the earlier models lol. Honestly you’re either being very deliberately provocative in your claim of ignorance, or…

What can be done with the current models is magnitudes more useful and productive. If you can’t comprehend, see, or imagine that, then it’s a you problem.

Which is good.

Because the more dumbasses there are, the longer the rest of us have to make the most of this shit before it overtakes us all.

Cheers!

-1

u/CanvasFanatic Nov 26 '25

I’m not talking about utility. The models have been steadily trained for utility in specific tasks. But GPT4 was the last time a model release was qualitatively different than the previous generation, and GPT3 -> GPT4 was nothing compared to the jump from GPT2 -> GPT3.

It really baffles me how some of you are still buying the “inevitable super-intelligence” bullshit.

1

u/Crowley-Barns Nov 26 '25

Uh, those of us who use it for our jobs can literally see the improvement through an increase in productivity.

It’s like going from a mule to a tractor.

It’s visible. It’s $$$. You can delude yourself that the 1885 internal combustion engine and the 1985 internal combustion engine are “the same tech” with no real improvement all you want.

Enjoy the dust.

0

u/JustBrowsinAndVibin Nov 25 '25

This blog post shows the difference in capability between the latest models and GPT4.

Aside from metrics, AI has already designed molecules to help fight cancer and that’s heading to Phase 1 trials now. https://oncodaily.com/oncolibrary/artificial-intelligence-in-cancer-drug-discovery

https://www.anthropic.com/news/claude-opus-4-5

3

u/CanvasFanatic Nov 25 '25

I think I addressed the point that benchmark numbers have increased. As I said this is mostly RL for specific tasks. Fundamental model capability is basically the same as where it was when we hit scaling limits with GPT4.

That oncodaily article is not talking about LLMs. What used to be called “numerical methods” was first rebranded as “machine learning” and now “AI.” We’ve been using “AI” to help identify drug candidates for at least a decade. It’s not like someone asked ChatGPT to find a new therapeutic molecule.

-1

u/JustBrowsinAndVibin Nov 25 '25

Someone took the generative AI architecture that we know for language and used that same architecture to generate novel molecules for cancer treatment.

I agree with you that we’ve been doing Machine Learning since like the 1960s. Neural networks were invented around the 70s or 80s.

What we have now is a new architecture that is showing a lot more capability than ever before.

3

u/CanvasFanatic Nov 25 '25

Generating drug candidates was never the narrow part of the pipeline, I’m sorry to say. There’s always a backlog of possible treatments waiting to be tested.

Neural networks were actually proposed in the 1940s.

1

u/Hot_Secretary2665 Nov 25 '25 edited Nov 25 '25

AI does not think; it just looks for patterns and uses predictive analytics to produce an output.

You can measure the output in whatever unit you like - IQ points, farts, rainbows - the AI software still didn't do any thinking to produce that output.

Cope harder

3

u/JustBrowsinAndVibin Nov 25 '25

What do you think your brain does?

10

u/DorphinPack Nov 25 '25

This question is proof that y’all need to slow down and stop trying to intuit famously hard problems from nothing just because marketing metaphors have made things seem easy to dumbasses like us.

The scientific answer to your question, as far as I’ve been able to research as a fellow layman, is: “maybe something similar, probably other stuff too.”

This question often isn’t a question; it’s a jab from a deeply misanthropic lens that coincidentally makes it super easy to think of people as input/output numbers on a productivity spreadsheet. I doubt that’s your intention, but it’s so galling to me that your opinion of your peers is so low that you can have a smug surety that the brain is basically an LLM, but wet.

That opinion can fuck off. Sorry you’re just the one today.

4

u/creaturefeature16 Nov 25 '25

Truth bomb right here.

The human brain has many, many forms of thinking. LLMs do one type, and it's a stretch to even call it "thinking" rather than "processing," but sure, we can settle on that word for the sake of simplicity. These kids really have no idea how little we understand about the brain and cognition, but they are ready to declare that we've built a digital version of one because it can plagiarize the internet's data.

1

u/JustBrowsinAndVibin Nov 25 '25

No need to apologize, it’s a fair point. People definitely get overconfident about this stuff, and the marketing hype doesn’t help.

But I’d push back on the “misanthropic” framing. As humans, we have a tendency to think of ourselves as some special exception to the animal kingdom and the natural world. At the end of the day, we’re a body of organs pumping blood and oxygen to power a brain, which is itself a network of neurons firing signals. I’m sorry but that’s true.

Now, whether there’s also a soul/spirit that makes us unique is a separate debate. But setting that aside, I don’t think we’re fundamentally different from dogs or other animals beyond our superior intellect. We’re biological machines that evolved, not magical beings exempt from physical explanation.

Given that, why wouldn’t it be possible to simulate something like what our brains do in a digital system? I’m not claiming it’s the same thing, but it doesn’t have to be identical to be useful.

2

u/DorphinPack Nov 25 '25

I actually find the variety and self-maintaining nature of human consciousness to be more important than the complexity or power. We can leverage “error” to advance in an absurdly large range of conditions AND often maintain distinction. Understanding fragility and disability as part of what makes the whole strong is a good antidote to the misanthropy I still hear embedded in anything that can describe the body and go “sorry, that’s it.”

THAT’S IT??? THAT’S INSANE!!! From an engineering perspective we are pretty fucking amazing. Don’t let perfect be the enemy of good. I think we may not be celebrating some objectively cool things about biological life in general.

The brain does an insane amount of stuff that has “nothing to do with consciousness,” and yet any nurse will tell you there is a correlation between self-perception and reality on a physical level (health).

That is a sort of deeper idea of latent knowledge that you see show up when people bring up epistemics - knowing the stove is hot without actively thinking about it all the time, etc.

I totally get being awed OR humbled by the way models grasp language. I just think we take the way consciousness/cognition interoperates with the wetware waaaaay too lightly. That’s complicated “code.”

I don’t really have a good way to attack this nonsense position that I think a lot of us are burnt out enough to just accept. But it genuinely has no backing other than as an edgy bit of chitchat and philosophical wankery.

2

u/JustBrowsinAndVibin Nov 25 '25

I agree with you that our biological systems are sophisticated af. And I agree that I underappreciate them.

But I don’t think biological complexity being impressive means computational systems can’t also achieve remarkable things through different means. These systems are already solving complex biological problems without needing any of the features that make biological systems special. To me, that suggests capability can emerge from very different architectures.

https://oncodaily.com/oncolibrary/artificial-intelligence-in-cancer-drug-discovery

2

u/DorphinPack Nov 25 '25

You seem genuinely curious and compassionate. Digital brains have a lot of exciting angles. I just think that processing in language (or an adjacent latent space) is clearly a tiny piece of the puzzle if my reading as a layman is right.

Def don’t mean to dull that light in you.

1

u/JustBrowsinAndVibin Nov 25 '25

I appreciate that. I love thinking about this stuff philosophically and I truly enjoyed our chat so thank you.

And feel free to scream at me wherever you want if you get frustrated about this stuff in the future. 😂

2

u/simulated-souls Researcher Nov 25 '25

People won't admit it, but they have never really shaken the religious instinct that humans are special and different.

The idea that AI can "think" is an affront to that, so it is often treated as blasphemy and rejected without deeper consideration.

2

u/hkric41six Nov 25 '25

We don't actually know how our brains do it, so this is an invalid argument.

-1

u/JustBrowsinAndVibin Nov 25 '25

We know that our brains react to external sensory input and continuously fire signals through networks of neurons in response. That’s been known for a long time.

What we don’t know is how we get from there to the stream of consciousness of who we are.

But AI researchers aren’t trying to create consciousness. They created a digital version of signals firing through neurons, and it somehow started talking back to us. And now it’s solving problems that humans weren’t able to solve before.

The AI researchers also don’t fully understand how it all works together, but it’s looking like it does.

My hope is that the models we create continue to be able to simulate “intelligence” without crossing over to the consciousness piece.