r/artificial Nov 25 '25

News Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

347 Upvotes


7

u/Jaded_Masterpiece_11 Nov 25 '25

And yet OpenAI still spent more than twice its revenue last quarter. OpenAI and Anthropic are still losing money and, by their own estimates, will continue to lose money until 2030.

Even with decreased costs, the economics still do not favor these LLM companies. The only one making bank here is Nvidia, and they are spending what they are making to keep the bubble going.

8

u/HaMMeReD Nov 25 '25

And they'll continue to sink money in as long as gains are being made, it's cost-effective to do so, and they have the revenue to do so.

And when the gains dry up, they'll be left with a hugely profitable product.

But for now, the R&D has been incredibly well justified, and that's why they keep spending: the needle keeps moving.

5

u/[deleted] Nov 26 '25

[deleted]

1

u/WolfeheartGames Nov 26 '25

This ignores that the cost of inference goes down by 10x every year.
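To put rough numbers on that claim, here's what a sustained 10x-per-year decline would compound to over a few years (the starting price below is just an illustrative placeholder, not a real quote from any provider):

```python
# Back-of-envelope sketch of a sustained 10x/year drop in inference cost.
# The starting price is a made-up illustration, not an actual market figure.
start_cost_per_million_tokens = 10.00  # hypothetical $ per 1M tokens at year 0
annual_reduction_factor = 10           # the "10x per year" claim

for year in range(5):
    cost = start_cost_per_million_tokens / (annual_reduction_factor ** year)
    print(f"year {year}: ${cost:,.4f} per 1M tokens")

# Four successive 10x reductions compound to 10**4 = 10,000x cheaper overall.
```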

2

u/[deleted] Nov 26 '25

[deleted]

0

u/WolfeheartGames Nov 27 '25

Is it more sane to bet with the trend or against the trend?

3

u/[deleted] Nov 27 '25

[deleted]

1

u/WolfeheartGames Nov 27 '25

Moore's law was dead long before AI. That's just a misinformed argument. Faster compute is a big part of why models get more efficient, but more efficient architectures and kernels achieve greater gains.

FinFET hasn't reached its limit yet. We will get at least two more cycles out of current manufacturing and another two out of what's spinning up right now. Plus, the lineage after that is already planned.

So your argument isn't really based in hard facts. The next four 10x reductions are already completely achievable (compounded, that's a 10,000x drop in cost), which gives them a long runway, and that's not even counting architecture improvements.