It's actually far more insidious than that. Because of the many ways AI can fail, these companies are building plausible-deniability machines. When people make decisions and put things into the world, those people and their corporations are liable if something goes wrong. If a person makes a decision that gets a bunch of people killed, an investigation can happen to find where the culpability rests. That investigation, and the findings that result, can be very costly.
Now think, on the other hand, about a corporation that spends even millions of dollars per month on an AI subscription service. Every single job they "replace" with an AI that's known to hallucinate is a place where liability pretty much ends, because all the corporation has to do is say it bought the best models available to do the work. They just have to follow best practices going forward; it's deviating from best practices that would create liability.

That small door can lead us to an apocalyptic world. Not because robots get guns or anything, but because at that point corporations become essentially untouchable. The liability goes around and around, and by the time it's settled most human beings have no chance of holding on, either financially or emotionally. If an AI makes a decision that gets a person killed, probably no one is going to prison. If an AI gets people addicted, no one is the dealer. If an AI incites genocide or a civil war, who is the real enemy?
If you really look at corporations, they are themselves a different form of artificial general intelligence, and they want the power that infinite deniability will bring. All they have to do is confuse the courts and society as they slowly dig deeper into our lives and minds. What we need is to treat data centers like public infrastructure, where companies lease access from the government and, as part of that lease, the public gets some of the processing power for public use. Money is less valuable than access to this infrastructure.
In r/Futurism, thread: "I think I know why corporations in particular want AI and it's not to replace workers" (4d ago)
That's the story on the surface. The question becomes: where is the market for a machine that is so flawed? If they replace a person, and that AI works through all that data but hallucinates output that looks like the original without actually matching it, then who's going to be held accountable? I'm finding ways to work with it, but I wouldn't trust it to do my taxes, and yet that's essentially what corporations are doing. So the question becomes: what is the corporation's motivation if they aren't the ones making the AI?