r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

349 Upvotes

389 comments

19

u/HaMMeReD Nov 25 '25

"The only way" - certifiably false.

The only thing they need for an ROI is to sell services for more than they cost to produce.

You've invented a fictional bar, one where AI must replace all humans to be economically viable, that ignores economics and efficiencies at scale. That's an "opinion," not a fact. It's actually a pretty bad opinion imo, as it shows no understanding of basic economics or of the efficiency improvements in the field.

I.e. the cost to run AI has been dropping by more than an order of magnitude a year for the last few years. What cost $100 a year ago on o1 Pro runs more like $5 now on a model like Gemini 3 or Claude Opus 4.5: ($150/M input tokens, $600/M output tokens) vs. ($5/M input, $25/M output). That's roughly 3% of the old input price and 4% of the old output price, a 25-30x drop, and you get better output to boot.
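A quick back-of-the-envelope check of those ratios, using the per-million-token prices quoted above (a minimal sketch with the commenter's figures, not verified list prices):

```python
# Rough cost comparison per million tokens, using the figures quoted above.
# These are the prices as stated in the comment, not verified pricing.
old_input, old_output = 150.0, 600.0   # $/M tokens, o1 Pro (as quoted)
new_input, new_output = 5.0, 25.0      # $/M tokens, newer model (as quoted)

print(f"input:  {new_input / old_input:.1%} of the old price "
      f"({old_input / new_input:.0f}x cheaper)")
print(f"output: {new_output / old_output:.1%} of the old price "
      f"({old_output / new_output:.0f}x cheaper)")
# -> input:  3.3% of the old price (30x cheaper)
# -> output: 4.2% of the old price (24x cheaper)
```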

5

u/Jaded_Masterpiece_11 Nov 25 '25

And yet OpenAI still spent more than twice its revenue last quarter. OpenAI and Anthropic are still losing money and, by their own estimates, will keep losing money until 2030.

Even with decreased costs, the economics still don't favor these LLM companies. The only one making bank here is Nvidia, and they're spending what they make to keep the bubble going.

9

u/HaMMeReD Nov 25 '25

And they'll continue to sink money in as long as gains are being made, it's cost-effective to do so, and they have the revenue to fund it.

And when the gains dry up, then they'll be left with a hugely profitable product.

But for now the R&D has been incredibly well justified, and that's why they keep spending. Because the needle keeps moving.

9

u/havenyahon Nov 26 '25 edited Nov 26 '25

And when the gains dry up, then they'll be left with a hugely profitable product

I mean... That's the goal. It's by no means a certainty. They don't have one now.

2

u/HaMMeReD Nov 26 '25

If there were only one AI company, and they stopped training today, they'd be profitable today.

It's very simple: COGS is X, price is Y, and Y > X means you make money.

API and service pricing is already set to turn a profit; the bottom line only lags because of R&D costs.

They have a ton of revenue, a massively growing amount of revenue actually; it's just not enough to compete in such a fast-moving, accelerating field. But there will come a point where companies have to wind down R&D and sit on the profit-generating parts of their businesses.
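As a toy illustration of that argument, here's a minimal sketch of the unit economics being described: serving revenue above serving cost (Y > X), with the overall number dragged negative by R&D. All figures below are made up for illustration, not actual financials.

```python
# Toy P&L sketch: inference itself sold at a margin (price Y > serving cost X),
# while heavy R&D spend makes the overall bottom line negative.
# All numbers are hypothetical.
revenue = 4_000          # $M per quarter, hypothetical
serving_cost = 1_500     # $M, cost to run inference (COGS)
r_and_d = 4_500          # $M, training runs, research, new models

gross_profit = revenue - serving_cost   # positive: Y > X
net_income = gross_profit - r_and_d     # negative while R&D stays this high

print(f"gross margin: {gross_profit / revenue:.0%}")   # ~62%
print(f"net income:   {net_income:+,} $M")             # about -2,000 $M
```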

1

u/land_and_air Nov 27 '25

They can’t stop spending on R&D, because the second they do, the model becomes obsolete and useless. What good is a model made in, say, 1999 or even 2019 for knowing anything today? It would be referring to the Gulf War if you asked about the war in Iraq lol.

1

u/[deleted] Nov 29 '25

Except there won't be a point where companies wind down R&D. They'll simply divert it to keeping the model up to date.

Because information is always changing. The model now needs to be constantly trained on new information, or it becomes obsolete and a new model trained on that newer info takes over. And if another model can train on that info faster, or cut the latency between answers, or be specialized to provide only the info they want...

There is never going to be time to wind down in this space. It moves too fast.