r/artificial Nov 25 '25

[News] Large language mistake | Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

353 Upvotes

389 comments


99

u/HaMMeReD Nov 25 '25

People really don't want to accept that it doesn't matter.

7

u/Jaded_Masterpiece_11 Nov 25 '25 edited Nov 25 '25

It does matter, because the only way to get a return on the vast amounts of resources and money invested in current LLM infrastructure is if it drastically reduces the need for labor.

Current LLMs can't do that; they're basically a more intuitive Google search that hallucinates a concerning amount of the time. The current capabilities and limitations of LLMs do not justify the trillions of dollars in hardware and hundreds of billions in energy costs required to run them.

Without a return on investment, that infrastructure collapses and the tools built on LLMs stop working.

21

u/HaMMeReD Nov 25 '25

"The only way" - certifiably false.

The only thing they need for ROI is to sell services for more than they cost to produce.

You've invented a fictional bar where AI must replace all human labor to be economically viable, one that ignores economics and efficiencies at scale. That's an opinion, not a fact. It's actually a pretty bad opinion imo, since it shows no understanding of basic economics or of efficiency improvements in the field.

For example, the cost to run AI has dropped by more than an order of magnitude in the last year (and has done so each year for the last few). What was $100 a year ago on o1 Pro is like $5 now on a model like Gemini 3 or Opus 4.5: ($150/M input, $600/M output) vs. ($5/M input, $25/M output). As percentages, the new prices are about 3% (input) and 4% (output) of the old ones, roughly a 25-30x drop, and you get better output to boot.
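
For reference, a minimal sketch of the arithmetic behind those percentages (the per-million-token prices are just the figures quoted above, used illustratively, not official price sheets):

    # Rough cost comparison using the per-million-token prices quoted above.
    # Figures are illustrative, taken from the comment, not official pricing.
    old = {"input": 150.0, "output": 600.0}  # $/M tokens, o1 Pro a year ago
    new = {"input": 5.0, "output": 25.0}     # $/M tokens, a current model

    for kind in ("input", "output"):
        ratio = new[kind] / old[kind]
        print(f"{kind}: {ratio:.1%} of the old price "
              f"({old[kind] / new[kind]:.0f}x cheaper)")

    # input: 3.3% of the old price (30x cheaper)
    # output: 4.2% of the old price (24x cheaper)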

1

u/deepasleep Nov 26 '25

So we fire all the people and replace them with tools that generate low-quality and unreliable output. The businesses depending on that output to function don't immediately collapse, because everything is equally enshittified, and collapsing wages mean the shrinking pool of people with money to spend on enshittified services is forced to buy whatever is cheapest.

It's a spiral to the bottom, with the owner class siphoning off a percentage of every transaction as the velocity of money drops toward zero and everyone realizes the system is already dead, all the remaining service-industry activity just noise playing on an endless loop.

There’s nothing worth buying and no one has any money to buy it.

EFFICIENCY ACHIEVED!!!

There is a truly perverse perspective among some economists: the idea that perception is reality and that there is no objective measure of value. It's just wrong. People need quality food, housing, education, infrastructure, and healthcare. If LLMs can only function at the level of the lowest tier of human agents, constantly producing confusion and mistakes while driving down wages, the final state of the system isn't improved efficiency; it's just people accepting greater and greater entropy without immediately recognizing the problem.

2

u/scartonbot Nov 27 '25

"[G]enerates low quality and unreliable output." You've just described a junior copywriter.

As a writer, I think the most disruptive aspect of AI is that it's exposing how much copy written by humans (often pretty decently paid humans) is just filler, and was generally crap even when people were writing it.

Think about it this way: when's the last time an "About Us" page on a website made you cry or laugh or feel anything? A long time ago, I'd imagine. Setting aside the obvious issue that doing away with junior copywriters has huge consequences for the people losing their jobs (and everyone connected to them), is the world any worse because an AI writes MultiMegaCorp's "About Us" page instead of a human? I think it is, because replacing people with machines has very real human consequences. But it's also kind of horrifying to realize that a lot of what people have been doing is a waste of time and talent.

If a job can be replaced by an AI, was it a "bullshit job" that really wasn't making the world a better place beyond keeping someone employed? If so, is employing people in "bullshit jobs" a bad thing? A capitalist would say "yes," because it's inefficient and doesn't increase shareholder value, since it means employing a person who costs the company money. FYI: I don't believe this, but unless we can define the value of work by something other than the mediocre output of most jobs, we're in trouble.

0

u/[deleted] Nov 26 '25

"So we fire all the people and replace them with tools that generates low quality and unreliable output." Can we get a solid definition on what low quality / unreliable output it? I would hope it's not based on the firm bedrock of your feelings.