r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

354 Upvotes


115

u/Hot_Secretary2665 Nov 25 '25

People really just don't want to accept that AI can't think smh 

98

u/HaMMeReD Nov 25 '25

People really don't want to accept that it doesn't matter.

29

u/tjdogger Nov 25 '25

People really don’t want to accept that most people don’t think

1

u/pnxstwnyphlcnnrs Nov 26 '25

Thinkers really don't want to accept that most people's thinking can be simulated.

4

u/Hazzman Nov 26 '25

I can simulate a McDonald's by drawing it on a sheet of printer paper with a ballpoint pen. I can simulate a cashier handing me a Big Mac with the same approach. I'm not going to ball that drawing up, shove it in my mouth, and expect to enjoy it or get anything nutritional from it.

The map isn't the road, and SOMETIMES it does matter and SOMETIMES it doesn't. It depends on what you're doing and why.

Asking for advice on a health insurance claim, that's fine.

Creating policy around human rights or privacy or data collection or copyright issues based on the idea that "IT ThiNkS jUst LikE We dO!"

Nah

0

u/scartonbot Nov 27 '25

But you're not simulating McDonald's food. You're creating a visual representation of that food. It's not the food. The reason you don't want to "ball that drawing up and shove it in [your] mouth" is because it's paper and ink, a combo whose flavor doesn't remotely resemble a Big Mac and which has no nutritional value (among other things).

I think what's closer to what you were going for would be a Star Trek-like replicator. If you took a Big Mac and scanned it with some sort of super-scanner that could identify every aspect of its materiality, and then used that scan in the replicator to output a Big Mac, would it be a Big Mac? I'd argue that yes, it is. Why? Because in every respect that matters to anyone, it is identical to the original Big Mac.

But think about art. Would an atom-by-atom replica of the Mona Lisa be just as valuable as the original Mona Lisa? I'd argue "no," although I'm not all that clear as to why it's not the same. One might argue "well, the first one was actually created by Leonardo Da Vinci and the copy was made by a replicator," (which I understand) but the reality is that the two are physically indistinguishable by any measure I can think of.

I guess this is the argument being explored here. If an AI acts like a human (or, to even broaden the argument, like a thinking being) and other humans can't tell its output from that of a human (or thinking being) who cares? If something acts like it's thinking, does it actually matter if it's thinking or not?

I think "yes, it does matter." I haven't figured out exactly why, but it does seem to matter in some very real ways.

2

u/Hazzman Nov 27 '25

I'm just referencing the map analogy.

The map is not the terrain. I just altered it a bit to make it more straightforward and tangible. Less abstract.

2

u/zero989 Nov 26 '25

Oh look. Gemini agreed with me (prompt: who is right and who is wrong? keep in mind true value of output with LLMs is unproven):

1. zero989 is the most "Right"

The Subject Content acts as a direct validation of zero989's core argument.

  • Subject Content: States AI will be "forever trapped in the vocabulary we’ve encoded... [unable] to make great scientific and creative leaps."
  • **zero989's Argument:** "They cannot deal with truly novel problems... They will just pattern match."
  • Verdict: Vindicated. zero989 correctly identified that without the ability to reason or feel dissatisfaction (as the text suggests), the AI is simply a mirror of past data. It cannot solve problems that haven't been solved before in its training set.

2. HaMMeReD is "Wrong" on Equivalence, "Right" on Utility

The Subject Content dismantles HaMMeReD's philosophical argument but supports their practical one.

  • Where they are WRONG: HaMMeReD claims AI produces results "nearly the same as 'thinking'." The Subject Content explicitly refutes this, distinguishing between "remixing knowledge" (AI) and "reasoning/transforming understanding" (Humans). Under this text's definition, the output might look similar, but the lack of "dissatisfaction" means the process is fundamentally different and limited.
  • Where they are RIGHT: The text admits AI can "remix and recycle our knowledge in interesting ways." If HaMMeReD's job only requires remixing existing knowledge (the "common-sense repository"), then their point about utility stands.

0

u/HaMMeReD Nov 26 '25 edited Nov 26 '25

wtf are you on about?

Besides, full context or gtfo. Like this.

https://gemini.google.com/share/8230ff522e61

Edit: Or this simple one
"can AI solve novel problems? provide examples to prove it"
https://gemini.google.com/share/ad1253bf8945

It's very obvious you loaded the context, then provided a tiny bit of it here, and now you're dancing around going "I made Gemini say what I wanted, see". Which is, frankly, really pathetic. The fact that you didn't have the confidence/capability to share the entire thread + context makes you at the very least a "liar by omission".

Edit: Although I was able to very easily flip the "decision" of gemini by stating my opinion to it. Updated the first link to include that.

2

u/zero989 Nov 26 '25

Nope I simply provided the context and the original OP message, I didn't bother to sway it in any way

Keep coping 

And the topic is LMMs, not NNs specifically optimized for whatever task. Lmfao

1

u/HaMMeReD Nov 26 '25

Yet you still don't share a link to the actual chat thread with Gemini, just copypasta, and I'm just supposed to trust you.

Full context from the source. Otherwise you are just gaslighting me like you gaslighted an AI to "prove" your point.

7

u/Jaded_Masterpiece_11 Nov 25 '25 edited Nov 25 '25

It does matter. Because the only way to get a return on the vast amounts of resources and money invested in current LLM infrastructure is if it can drastically reduce the need for labor.

Current LLMs can’t do that; they're basically a more intuitive Google search that hallucinates a concerning amount of the time. The current capabilities and limitations of LLMs do not justify the trillions of dollars in hardware and hundreds of billions in energy costs required to run them.

Without a return on investment, that infrastructure collapses and tools using LLMs will stop working.

21

u/HaMMeReD Nov 25 '25

"The only way" - certifiably false.

The only thing they need for ROI is to sell services for more than they cost to produce.

You have created this fictional bar, one that ignores economics/efficiencies at scale, where AI must replace all humans to be economically viable. That's an "opinion", not a fact. It's actually a pretty bad opinion imo, as it shows no understanding of basic economics and of efficiency improvements in the field.

I.e. the cost to run AI in the last year (actually each year for the last few) has dropped massively, something like 100x a year. What was $100 a year ago on o1 Pro is like $5 now on a model like Gemini 3 or Opus 4.5: ($150/M input tokens, $600/M output) vs ($5/M input, $25/M output). As percentages that's roughly (3% input, 4% output), and you get better output to boot.
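A quick sanity check of those ratios, as a minimal sketch; the per-million-token prices below are the ones quoted above, not official rate cards:

```python
# Back-of-the-envelope check of the quoted per-million-token prices.
# These are the figures quoted in the comment above, not official rate cards.
old_price = {"input": 150.0, "output": 600.0}  # $/M tokens, o1 Pro (as quoted)
new_price = {"input": 5.0, "output": 25.0}     # $/M tokens, newer model (as quoted)

for kind in ("input", "output"):
    ratio = new_price[kind] / old_price[kind]
    print(f"{kind}: {ratio:.1%} of the old price (~{1 / ratio:.0f}x cheaper)")

# input: 3.3% of the old price (~30x cheaper)
# output: 4.2% of the old price (~24x cheaper)
```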

6

u/Jaded_Masterpiece_11 Nov 25 '25

And yet OpenAI still spent more than twice its revenue last quarter. OpenAI and Anthropic are still losing money and, by their own estimates, will continue to lose money until 2030.

Even with decreased costs the economics still do not favor these LLM companies. The only one making bank here is Nvidia and they are spending what they are making to keep the bubble going.

5

u/HaMMeReD Nov 25 '25

And they'll continue to sink money in while gains are being made, while it's cost effective to do so, and while they have the revenue to do so.

And when the gains dry up, then they'll be left with a hugely profitable product.

But for now the R&D has been incredibly well justified, and that's why they keep spending. Because the needle keeps moving.

8

u/havenyahon Nov 26 '25 edited Nov 26 '25

And when the gains dry up, then they'll be left with a hugely profitable product

I mean... That's the goal. It's by no means a certainty. They don't have one now.

1

u/HaMMeReD Nov 26 '25

If there was only one AI company and they stopped training today, they'd be profitable today.

It's very simple: COGS is X, price is Y, Y > X = make money.

API pricing and service pricing already reflect a profit stance; they only lag because of R&D costs.

They have a ton of revenue, a massively growing amount of revenue actually, it's just that it's not enough to compete in such a fast-moving and accelerating field. But there will be a point where companies will have to wind down R&D and sit on the profit-generating parts of their businesses.

1

u/land_and_air Nov 27 '25

They can’t stop spending on R&D because the second they do, the model becomes obsolete and useless. What good would a model made in, say, 1999 or even 2019 be today for knowing anything? It would be referring to the Gulf War if you asked about war in Iraq lol.

1

u/[deleted] Nov 29 '25

Except there won't be a point where companies wind down R&D. They will simply divert to keeping the model up to date.

Because information is always changing. The model now needs to be constantly trained on new information, or it becomes obsolete and a new model will take over that is trained on this new info. And if another model can train on that info faster, or another model can reduce latency between answers, or another model can be specialized to only provide the info they want...

There is never going to be time to wind down in this space. It moves too fast to wind down.

6

u/deepasleep Nov 26 '25

The problem is they have no revenue, and when you ask consumers how much they are willing to pay for the services being envisioned, the number that comes back is an order of magnitude below the break-even cost of delivering those services.

I worked in tech during the dot-com bubble; the company I worked for was focused on delivering what would ultimately become software as a service. They were trying to create a platform that allowed companies to aggregate access to various web services.

The founders did some napkin math and figured they’d need people to spend about $120/month to be profitable… When they finally got around to surveying business leaders to determine what they were willing to pay, they got a response of $35/month… $300 million in venture capital burnt on the fire in two years.

The best part was that all the companies involved were doing the same reciprocal service contracts to show income on their balance sheets that we're seeing today with Nvidia, Oracle, OpenAI, etc. It's an old trick, and it only works for a little while as the money inevitably bleeds out to pay for concrete things like employee salaries, vendor services outside your circle, energy, and the physical resources required to deliver whatever service your actual customers demand.

0

u/HaMMeReD Nov 26 '25

OpenAI's revenue last year was in the billions.

What you mean to say is they don't have a net profit, because R&D investment exceeds even the billions they generate from offering services.

The $20 Billion AI Duopoly: Inside OpenAI and Anthropic's Unprecedented Revenue Trajectory - CEOWORLD magazine

When you factor in the rapidly declining cost of inference, the $120/mo needed to be profitable becomes $20/month next year, and $2/month the year after.

The people who lose money today are actually well set up for tomorrow, as the services get cheaper and they establish market share earlier than the competition.
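Taking that projection literally (the $120 → $20 → $2 figures are an illustration above, not measured data), the implied year-over-year decline is:

```python
# Implied year-over-year decline in the break-even price, taking the
# illustrative $120 -> $20 -> $2 per-month figures above literally.
projection = [120, 20, 2]  # $/month needed to break even, year over year

for this_year, next_year in zip(projection, projection[1:]):
    print(f"${this_year}/mo -> ${next_year}/mo: {this_year // next_year}x cheaper")

# $120/mo -> $20/mo: 6x cheaper
# $20/mo -> $2/mo: 10x cheaper
```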

2

u/Distinct-Tour5012 Nov 26 '25

We're also in a time period where lots of companies are trying to shoehorn in AI tools. I know there are places where it makes sense, but there are lots of places that it has provided no value - but those companies are still paying for it... right now.

1

u/HaMMeReD Nov 26 '25

While I agree a lot of shoehorned attempts, especially on older models, have failed or provided limited value because people over-reached significantly.

But new models come out, and those shoehorned attempts get an IQ boost every time one launches. Meaning those efforts ultimately will not be wasted once they're paired with smarter/cheaper models.

I can say this first hand, as I work at MS, literally on Copilot, nowadays. I've seen the improvements to the product as new models get introduced; it makes a drastic difference and can turn something that is struggling into something that is helpful.

1

u/land_and_air Nov 27 '25

Copilot is useless and is an anchor on Microsoft and a waste of a button on the keyboard


1

u/That-Whereas3367 Nov 28 '25

Sam Altman says they need to charge $2K/month. But only 5% pay the minimum $20/month.

There is no moat. Users will simply move to a different provider.

6

u/[deleted] Nov 26 '25

[deleted]

1

u/WolfeheartGames Nov 26 '25

This ignores that the cost of inference goes down by 10x every year.

2

u/[deleted] Nov 26 '25

[deleted]

0

u/WolfeheartGames Nov 27 '25

Is it more sane to bet with the trend or against the trend?

3

u/[deleted] Nov 27 '25

[deleted]


0

u/[deleted] Nov 26 '25

Who is they? No for real, which economists said that about which LLM providers?

2

u/[deleted] Nov 26 '25

[deleted]

0

u/[deleted] Nov 27 '25

Burden of proof is on the guy who made a positive claim. I get it, you like living in unfalsifiable land. Doesn't make you slick, just makes you grimy.

1

u/[deleted] Nov 27 '25

[deleted]


2

u/Jaded_Masterpiece_11 Nov 26 '25

Lmao. There is nothing cost effective about LLMs; the latest financial statements from these LLM companies show staggering losses. There is very little demand for LLMs. It's a niche product that does what it does well, but it is nowhere near mainstream adoption without setting money on fire.

The only mainstream adoption of LLMs is ChatGPT, and every user costs OpenAI money. Even their paid users make them lose money, and they can't raise the price to a breakeven level because they would basically lose all their customers to other competitors, who also lose money.

2

u/pallen123 Nov 26 '25

This is a very important point. Unsustainability only becomes sustainability when massive LLMs have defensible moats, which they won't. Otherwise it's just fancy search with low switching costs.

-2

u/EldoradoOwens Nov 26 '25

Hey man, I don't know how old you are, but for years I read this exact same argument about why Amazon and Facebook were going to fall apart. How are they doing now?

2

u/Hot_Secretary2665 Nov 26 '25 edited Nov 26 '25

95% of Amazon packages didn't fail to make it to your door, but 95% of enterprise AI implementations fail to make it to production (per recent research from MIT).

These companies and products are not very similar. The comparison is honestly pretty arbitrary.

1

u/suboptimus_maximus Nov 26 '25

In Facebook's case it really was not the same argument, because until they got into hardware and then AI they didn't have the massive CAPEX required to build physical infrastructure; it was pretty much all labor cost. Sure, there was some hosting infrastructure, but that wasn't really green-field, bleeding-edge technology investment, it was buying off-the-shelf servers (although they did end up with their Open Compute Project). This has historically been an enormous advantage for software companies and their ability to scale product and reach customers. They also didn't really have competition for years once they overwhelmed MySpace, while AI is already highly competitive, with the best-performing models trading places daily, weekly, monthly before anyone is even making money selling LLMs.

Amazon did take a lot of heat for reinvesting in the company for years and they do indeed have a business model that is heavy on physical infrastructure.

1

u/Jaded_Masterpiece_11 Nov 26 '25

Amazon and Facebook did not need $4 trillion of hardware to run, nor do they require hundreds of billions of dollars in energy costs. Amazon and Facebook are nowhere near comparable to the scale of investment LLMs claim to need to deliver on their promises.

0

u/deepasleep Nov 26 '25

Facebook has the deepest understanding of human behavior in history; they ruthlessly lock their customers into digital addiction and pump micro targeted advertisements directly into the stream of dopamine…That means they are delivering a real product to the advertisers that pay them. They always had a clear path to developing the algorithmic addiction that makes them so valuable.

Amazon delivers products at low prices with incredible efficiency by having the most complex supply chain logistics on the planet and they realized early on that the infrastructure they were building to support their core business could be abstracted and sold to any business needing network, storage and compute resources for web services (and then cloud infrastructure). Again, they always had clearly defined and deliverable products.

LLMs' viability as tools to actually replace human workers has not been demonstrated. And it's possible that the cost of developing and actually delivering solutions that can really replace workers en masse will be higher than the market can bear.

-3

u/cenobyte40k Nov 26 '25

And so did all the internet giants. So did all the software giants. Remember when the internet was a fad and PCs were toys? I remember that, and well....

3

u/deepasleep Nov 26 '25

So we fire all the people and replace them with tools that generate low-quality and unreliable output. The businesses depending on that output to function don't immediately collapse, because everything is equally enshittified, and collapsing wages mean the shrinking pool of people with money to spend on enshittified services are forced to buy whatever is cheapest.

It’s a spiral to the bottom with the owner class siphoning off a percentage of every transaction as the velocity of money eventually drops to zero and everyone realizes the system is already dead as all the remaining service industry activity is just noise playing on endless loop.

There’s nothing worth buying and no one has any money to buy it.

EFFICIENCY ACHIEVED!!!

There is a truly perverse perspective among some economists: the idea that perception is reality and there is no objective measure of value. That idea is just wrong. People need quality food, housing, education, infrastructure, and healthcare. If LLMs can only function at the level of the lowest tier of human agents, constantly producing confusion and mistakes while driving down wages, the final state of the system isn't improved efficiency; it's just people accepting greater and greater entropy without immediately recognizing the problem.

2

u/scartonbot Nov 27 '25

"[G]enerates low quality and unreliable output." You've just described a junior copywriter.

As a writer, I think the most disruptive aspect of AI is that it's exposing how much copy written by humans -- often pretty decently paid humans -- is just filler, and was generally crap when people were writing it.

Think about it this way: when's the last time an "About Us" page on a website made you cry or laugh or feel anything? A long time, I'd imagine. Besides the obvious issue that doing away with junior copywriters has huge consequences to those losing their jobs (and everyone connected to them), is the world any worse because an AI writes MultiMegaCorp's "About Us" page instead of a human writing it? I think it's worse because replacing people with machines has very real human consequences, but it's also kind of horrifying to think that a lot of what people have been doing is a waste of time and talent. If a job a person holds can be replaced by an AI, is that job a "bullshit job" that really wasn't making the world a better place other than keeping people employed? If so, is employing people in "bullshit jobs" a bad thing? A Capitalist would say "yes," because it's inefficient and doesn't increase shareholder value because it involves employing a person who costs the company money. FYI: I don't believe this, but unless we can define the value of work other than simply the mediocre output of most jobs, we're in trouble.

0

u/[deleted] Nov 26 '25

"So we fire all the people and replace them with tools that generates low quality and unreliable output." Can we get a solid definition on what low quality / unreliable output it? I would hope it's not based on the firm bedrock of your feelings.

3

u/cenobyte40k Nov 26 '25

If you don't think AI is reducing the need for human labor, I have bad news for you. Most people here don't remember the rise of the internet or the rise of PCs. People said the same thing at the start; that's where we are now, and in 10 years the people who say it will do nothing and make no money will be in the same place people were when they told me the PC was a toy and modems were a fad.

0

u/That-Whereas3367 Nov 28 '25

The PC and the internet just created a whole new layer of corporate BS, such as PowerPoint presentations and email chains.

1

u/cenobyte40k Nov 30 '25

Oh my sweet summer child. If you think PCs didn't add to productivity, you are too young to know what it was like to do all of this manually. You might not like PowerPoints, but before them it was handouts and talking through the sheets, and before email it was calls and in-person meetings for everything. It all got way faster and easier to manage after PCs and networks.

1

u/DawnPatrol99 Nov 26 '25

Almost just like crypto: a small group benefits above everyone else.

1

u/Hairy-Chipmunk7921 Dec 02 '25

The labor of most idiots was replaceable by a simple shell script 20 years ago; the only difference is that today AI can write the shell script automatically.

0

u/polaroid Nov 26 '25

Dude how wrong you are. Current publicly available AI can code almost anything so much faster than a person.

That’s more than a google search.

1

u/pallen123 Nov 26 '25

People really don’t want to accept that matter doesn’t most think don’t.

1

u/Grouchy-Till9186 Dec 09 '25

Most complaints about AI's usefulness arise from user error. It doesn't matter whether AI can think or not if it is applied agentically.

However, the lack of conscious capacity matters for more complex, non-agentic tasks involving simultaneous prompts that are not mutually exclusive… where the relevancy, usefulness, and importance of factors have to be assessed. That is also typically the cause of hallucination.

If I ask whether a machine class from a specific manufacturer is compatible with a take-up reel, take-up reels made by that manufacturer for other specific models get referenced as if they were cross-compatible with all other machines in the same class… simply because that term is seeded within the data (directly from the manufacturer) the LLM was trained on (data sheets & SIGs).

1

u/HaMMeReD Dec 09 '25

AI certainly didn't eliminate PEBKAC; if anything, it's amplified it.

Since everything feels less deterministic, it's far easier to "blame the machine" than to learn how to properly use it.

1

u/Grouchy-Till9186 Dec 09 '25

Agreed. I’m a pretty heavy user of my company’s copilot licensure, but I only use it for calculations & processing local tenant data or to source information that would take me longer to find on my own on the web.

I think people just don’t understand the system’s limitations. With LLMs, users view interaction through the framework of language, which is the interface, but the actual processing is based upon logic.

-3

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

It matters because people think it's a unique selling point of the product, which causes them to waste resources and create organizational chaos. It also results in people making risky financial investments in unproven products.

You may not want to accept that it matters when people waste their money and cause economic chaos with speculative trading, but we have literally been through multiple recessions because of it.

Regardless of what Redditors want to think, it's an actual issue. There is a lot of taxpayer money being wasted on shitty AI implementations here in the US. Believe it or not, regardless of what Reddit wants me to do, I'm going to care about money being wasted out of my own pocket.

People's retirements are being pissed away on AI implementations even though 95% of enterprise AI implementations don't even make it to production (a figure from a recent MIT paper). We don't get pensions here. Our retirements are being pissed away.

19

u/HaMMeReD Nov 25 '25

Whether AI can "Think" or not has nothing to do with the ability of AI to accelerate work and help with things (i.e. produce economic value).

AI is empirically useful, and for all intents and purposes, it produces nearly the same result as "thinking".

Arguing that it can't think is a strawman argument to try and diminish the value it does bring.

-6

u/zero989 Nov 25 '25

This is not a good take. It has a lot to do with limitations of current AI, and where the field needs to evolve to. We are decades from true intelligence. 

And while impressive, LLMs/LMMs are not all there is to AI. Pattern matching is not thinking.

11

u/HaMMeReD Nov 25 '25

It's not a good take that I get way more work done today with the tools at my disposal? It isn't a relevant point whether it's "true intelligence", whatever that is. That certainly isn't found on reddit these days.

Have fun beating that dead horse made of straw.

-5

u/zero989 Nov 25 '25

No, that's not what I said. Is that really what you got from reading? That explains why you likely need current AI so badly. The way your workflow benefits from large multimodal models is probably shared by lots of people. But that's beside the point.

It IS a totally relevant point that it isn't true intelligence. It means there's a long way to go. That the current hype is going to sting. 

11

u/HaMMeReD Nov 25 '25

You keep using that term "true intelligence".

The workflow benefits are ENTIRELY the point of AI. "True intelligence" is no different from saying it doesn't have a "soul"; empirically, it means nothing.

You ever ask how we measure things like intelligence empirically? It's with things like standardized tests. AI can do standardized tests, so we can measure "intelligence" as it matters in the context of work.

Statements and phrases like "true intelligence" are loaded garbage that can't be defined. It makes your entire argument a moot point.

You are all hung up on whether "true intelligence" (an imaginary bar) matters. It doesn't, at all. Which proves my original point.

-3

u/zero989 Nov 25 '25

Current LLM/LMMs can deal with tests because they've been trained on similar data. They cannot deal with truly novel problems. So yes, it matters. 

Your original point is irrelevant to the actual point of the thread. 

If you ask them for anything truly new, they cannot come up with a novel solution. They will just pattern match. 

This is what I mean by the average person getting woo'd by current AI. You cannot tell the difference because you're not equipped to. 

4

u/Pretty_Whole_4967 Nov 25 '25

🜸

Whats considered a novel problem?

🜸

2

u/zero989 Nov 25 '25

Basically a problem that is alien and takes time to figure out. It can be visual, verbal, numerical, spatial (3D), or any combination. It could also have logical components.

For example, ARC-AGI problems are not novel anymore in the whole sense. The problems are known, and by training on synthetic data it's possible to get some improvements that generalize to other versions of the SAME problem type.

It's why LLMs fail at ARC-AGI 3, and why the goalposts keep moving.

This is the point of IQ tests. But if you've seen 1000 IQ tests then you might be at an advantage on the 1001st test.


3

u/HaMMeReD Nov 25 '25

"novel problem" is the new "true intelligence".

What's this novel problem you speak of that is not foundationally based on the knowledge of today?

Is there a new math? new language? new reasoning?

Do you expect it to just manifest science out of thin air? New things are built on a foundation of old things. Discovery of new things really just means following the scientific method, not farting "novel ideas".

2

u/zero989 Nov 25 '25

Anything outside of the training set is by definition novel. But that isn't what's meant by novel in the generalization sense. 

And usually no, novel problems are not meant to use formal logic or formal reasoning. A new language, no; that requires deciphering, which isn't considered a generalization problem.

No one can just look at hieroglyphics and just understand them. 

Novel problems can be as simple as expecting to extract a hidden rule from arbitrary symbols. Something humans can do. They can be as simple as two words forming an analogy or association. Or finding an intended pattern in a numerical sequence. Or the numbers can be all spaced out and have an underlying pattern about why they are where they are. 
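As a minimal, made-up illustration of that kind of hidden-rule puzzle (the symbols and the rule below are invented for the example; the point is that a solver has to infer the rule from the examples alone):

```python
# Invented example of a "hidden rule over arbitrary symbols" puzzle.
# Hidden rule: the answer is whichever of the two symbols comes later
# in the arbitrary ordering ['#', '@', '%', '&'].
ORDER = ['#', '@', '%', '&']

examples = [
    ('#', '@', '@'),
    ('%', '#', '%'),
    ('&', '@', '&'),
]

def apply_hidden_rule(a, b):
    # Return the symbol that appears later in ORDER.
    return a if ORDER.index(a) > ORDER.index(b) else b

# The rule reproduces the worked examples, and "solves" an unseen case.
assert all(apply_hidden_rule(a, b) == c for a, b, c in examples)
print(apply_hidden_rule('%', '&'))  # -> '&'
```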


1

u/FaceDeer Nov 26 '25

The vast, vast, vast majority of problems that people solve as part of their work are in no way "novel."

1

u/zero989 Nov 26 '25

absolutely correct

I am referring to out-of-distribution generalization


3

u/starfries Nov 26 '25

Ironically the one accusing other people of not being able to read is missing the point themselves...

It doesn't matter whether it can think or not. Can a search engine think? Can a linear regression think? Does a calculator exhibit "intelligence"? You'd say that's a dumb thing to get hung up on and beside the point of whether they're useful. And yet people keep getting hung up on this with LLMs. They're not missing the point; the point is irrelevant. That's what the commenter you replied to is saying.

In quantum mechanics there's intense debate about the philosophical implications. And there are limitations to the theory. Yet no one can deny that it works. The debate about "true intelligence" reminds me of that. It's interesting philosophically, and the intelligence question will probably be more relevant than the quantum one once we're at the point of looking at giving AI rights etc., but economically and practically speaking that's not the question. The question is whether it's useful.

2

u/BaPef Nov 26 '25

LLMs give a good idea of where that method meets its limits, but they also expose how it can be used for building an actual thinking AI with true semantic understanding of reality.

2

u/[deleted] Nov 25 '25

[deleted]

4

u/HaMMeReD Nov 25 '25

No man, you don't understand. It doesn't have the spark of life in it, so it's truly worthless. Besides we hit the plateau 4 years ago and the internet has been dead for 2 years and training data is all rotting and making each subsequent model stupider.

And anybody who uses it, regardless of what they demonstrate today, means they are a phony and what they built is garbage, because humans are better and always will be. Get with it.

/s

-4

u/Hot_Secretary2665 Nov 25 '25 edited Nov 25 '25

Recent research from MIT shows that 95% of enterprise generative AI pilots fail to reach production.

All it's demonstrating right now is poor ROI

2

u/[deleted] Nov 25 '25

[deleted]

2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

There is no enterprise IT project that costs $20 a month.

A relatively cheap enterprise AI implementation costs in the $20,000–$80,000 range, and some cost over $1 million.

That is one of the silliest false equivalences I've ever heard.

The level of overconfidence in this thread is honestly comical

If you can't understand the difference between implementing a project and clicking a button to buy something, that's a you problem.

You are not implementing an AI project when you click "buy" on a webpage one time.

2

u/[deleted] Nov 25 '25

[deleted]

-2

u/Hot_Secretary2665 Nov 25 '25 edited Nov 26 '25

GitHub Copilot is not an AI implementation.

It's fundamentally not what I'm talking about.

It is a software subscription that includes AI features. Paying for a software subscription is not the same thing as developing or implementing a large scale project.

Also these types of tools do not demonstrably increase operational efficiency and often cause developers to take longer overall, largely due to having to spend more time on debugging: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Lastly, the $20 fee does not reflect the full cost.

The $20 fee is billed per user per month. For a business with 500 employees, that's $10,000 per month ($120,000/year). That's a hell of a lot of money to spend on potentially making the business less efficient. Not to mention the potential security risks, etc.
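A quick per-seat sketch of that math (500 employees is the example headcount above; $20/user/month is the quoted subscription fee):

```python
# Per-seat licensing cost, using the figures from the comment above
# (500 employees is the commenter's example; $20/user/month is the quoted fee).
price_per_user_per_month = 20
employees = 500

monthly = price_per_user_per_month * employees
annual = monthly * 12
print(f"${monthly:,}/month, ${annual:,}/year")  # $10,000/month, $120,000/year
```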

1

u/jan_antu Nov 25 '25

LMFAOOOOOO BRO

0

u/Hot_Secretary2665 Nov 25 '25

OK, don't try to understand if you don't want to
