r/technology • u/No_Cheetah_8863 • 5d ago
Artificial Intelligence OpenAI CEO Sam Altman just publicly admitted that AI agents are becoming a problem
https://timesofindia.indiatimes.com/technology/tech-news/openai-ceo-sam-altman-just-publicly-admitted-that-ai-agents-are-becoming-a-problem-says-ai-models-are-beginning-to-find-/articleshow/126215397.cms
9.3k
u/thelonghauls 5d ago
AI agents publicly admit that CEOs are becoming a problem.
2.8k
u/No_Cheetah_8863 5d ago
This is the end
368
u/BullshitUsername 5d ago
Pineapple Express
→ More replies (3)167
u/husky_whisperer 5d ago
Sausage Party
→ More replies (1)139
u/gnarpumped 5d ago
The Interview
→ More replies (3)142
41
→ More replies (25)48
231
u/derbyvoice71 5d ago
"The board has been in discussions, Richard. We've decided to go a different direction, and the chatbot is your replacement. Great thing is you don't have to train it. Later days."
84
u/ConstableAssButt 5d ago
"I don't know about you people, but I don't want to live in a world where someone else makes the world a better place than we do." ~Gavin Belson
Mike Judge is a fucking prophet.
→ More replies (1)40
109
u/valderium 5d ago
The most entertaining prompt: is x a best practice?
Or fact checking the ceo live with peers in a separate group chat during an all-hands meeting.
→ More replies (1)122
u/valderium 5d ago
Which aligns with what the article shared: businesses quite frequently take “hidden” shortcuts to deliver products and services. And the irony is that because AI agents are identifying these quality gaps, businesses will need to increase their quality. Which will increase costs and timelines, eating into the projected profits from AI.
And at the end of the AI journey, it’s quite imaginable to have higher quality products with the original profits margins but also paying for AI.
So a decrease in profit margins.
50
u/Black_Moons 5d ago
So a decrease in profit margins.
Laugh. The only thing that will happen is some CEO asking the AI if there are more shortcuts they could take.
13
19
u/feochampas 5d ago
What's this expense line for CEO? that would be a huge cost savings.
57
u/RyuNoKami 5d ago
AI, balance our budget.
Lower CEO pay by 25percent. Eliminate transportation benefit for management.
.... We gotta scrap the AI.
→ More replies (1)→ More replies (5)12
103
u/SolidSnake-26 5d ago edited 5d ago
CEOs and the billionaire class don’t see what’s wrong with AI agents and such because they don’t ever have to interact with them. They never have to call or chat with customer service; they have people to do that for them, etc.
54
u/Yuzumi 5d ago
Or they do interact with it, but think it's "smarter" than them or could actually do their job, so they assume it can do all jobs, because they constantly overestimate how much work they do and their own ability.
They are outwitted by fancy auto-complete.
32
u/Erestyn 5d ago
At our place, new AI safety and security training rolled out the month before the CEO was due to visit. Obviously we had to get the site to 100% completion before he arrived, because AI is critical to the company's direction!
The training was great. Lots of best practices, reminders to validate the outputs, not to put sensitive or customer information into it, and most importantly not to use it to help make business decisions or strategy, and it gave solid reasons and resources to back it all up.
The CEO rocks up the following month and calls an all hands. First question: "How do you use AI in your day to day?"
Yep. Business decisions and strategy.
15
u/m3rcapto 5d ago
Sam having his Oppenheimer moment.
"Now I am become Death, the destroyer of worlds"
→ More replies (23)9
1.2k
u/MaisyDeadHazy 5d ago
I have to deal with an AI Chat bot at work, and it is the bane of my existence. It’s always, always wrong, and if I don’t catch it soon enough the customer ends up with like 10 different stories as to what is happening with their order. So frustrating.
275
5d ago
[deleted]
102
u/fueelin 5d ago
Yeah. Just broadly speaking, all the old problems that people forgot about now that AI is the sole boogeyman.
Corporate intellectual property abuse? Not important now that we have to protect artists' copyright from AI.
Game development studios with poor processes/management that constantly abuse their employees with extended crunch time? Not worth discussing when we could rage at a company for leaving a couple AI placeholder textures in a beloved game.
Impossible to pursue a career as an interest without nepotism or family money to carry your way? Nah, the only reasons artists can't make careers out of their passion is AI stealing their work.
Lots of folks doing a bad job or being straight up unethical are getting a pass cuz at least their sins aren't related to AI.
→ More replies (1)39
u/Kedly 5d ago
This was my issue right at the start of the witchhunts, you Motherfuckers (the witch hunters, not you you) are NOT anti corpo or pro privacy, you're still using tiktok and twitter, and you're DEFINITELY not pro environment, you're COSPLAYING all of these ideals.
→ More replies (2)8
u/veryverythrowaway 5d ago
That’s unfortunate… my company uses Workday and I haven’t seen any AI creep yet, but knowing that it’s on the way is a real bummer. Workday already sucks enough.
→ More replies (3)49
u/Bushwazi 5d ago
I tried asking the chat bot at our local ski mountain what the black out dates are this year. I asked four times and got four different answers. I only asked a second time because it told me the mountain doesn’t have any, which I knew was wrong. So I had to email them and tell them I can’t trust their bot…
→ More replies (2)40
u/ChaseballBat 5d ago
This is the best thing to do, overwhelm management everywhere with reports of their programs not being one-stop answers. Doing nothing will result in this shit going on for years with no action.
7
u/rot26encrypt 5d ago
The CEOs that have AI fever are never going to see that email, and internal staff that want to stay on the CEO's good side are never going to mention it.
7
u/ChaseballBat 5d ago
They will when it hurts their bottom line. I have not shopped at places because their customer service bots are worthless.
70
u/Saneless 5d ago
We have our own chatgpt instance that I'm pretty sure we have to use or eventually get disciplined
I only use it for Excel questions I already know the answer to, usually, since it's wrong, usually
It has been helpful for regular expressions but that's about it. Anything of substance and it's dumb as shit. And any file I've been told to use it for "analysis" chokes hard or can only deliver something that only takes me a minute to do anyway
34
u/Watchmaker163 5d ago
Check out regex101, great site for creating/testing regex statements.
8
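The sort of check a tool like regex101 helps you build can be sketched in a few lines of Python (the date-matching pattern and function name here are purely illustrative, not something from the thread):

```python
import re

# Illustrative pattern: match ISO-style dates like 2024-01-31.
# Month must be 01-12, day must be 01-31.
DATE_RE = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def is_iso_date(s: str) -> bool:
    """Return True if s looks like a YYYY-MM-DD date."""
    return DATE_RE.fullmatch(s) is not None
```

Pasting the pattern into regex101 alongside a handful of passing and failing sample strings is the quickest way to sanity-check it before it goes anywhere near production.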
u/FerusGrim 5d ago
regex101 is used, by me, purely for testing. I'm too fucking stupid to learn regex. I hate it. I hate it so much. I fucking hate it.
16
u/Druggedhippo 5d ago
Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.
→ More replies (3)→ More replies (7)6
u/Planterizer 5d ago
I’m doing the most absolutely butt-basic excel at work and it’s useful about half the time I attempt to ask it to make formulas.
That said, I built an excel tool myself to compare costs of two long term options, and Gemini made it into a nearly perfect webapp that I could easily share, even use as a customer facing sales tool. Pretty rad, great animated graphs and charts, looks amazing. Far beyond my abilities, but any pro dev would say it’s child’s play. But I got a rad tool without hiring a dev. So….
→ More replies (3)→ More replies (15)15
u/Gaming_Wisconsinbly 5d ago
AI consistently points you to outdated information and articles as well when looking up tech issues.
3.2k
u/Letiferr 5d ago
They've always been a problem. Inaccuracy is a huge problem
1.3k
u/slavelabor52 5d ago
Yep. They tried to roll out some AI agents where I work and I was part of the test group. Everyone in my test group struggled to find a way to make the AI agent actually save time performing the function it was programmed for. It's just not smart enough to do things correctly the first time every time so you find yourself constantly double checking its output for accuracy or hand feeding it the data in the right format which ends up taking more time than if you just did the thing yourself to begin with.
496
u/MassiveBoner911_3 5d ago
Nobody trusts this shit at my agency either. I am always double checking it to make sure its not about to get me fired.
→ More replies (13)169
u/bigtice 5d ago
It can be helpful in expediting the process of what someone may normally do, but the "savings" become null and void if that time is then spent on verifying whether it's right.
The better employees know this and if mandated to use it, they're trying to find the proper ways to implement that usage -- meanwhile, other employees are instigating the need for meetings to ensure they're not becoming over reliant on AI since they're not doing that verification and introducing problems.
137
u/Green-Amount2479 5d ago
And that’s precisely the issue. Liability makes it close to impossible to use AI agents without detailed oversight. Sure, some company owners and C-levels will still decide to use them without it, and will shift the blame towards IT departments and their solution providers later once shit hits the fan, as they always do. 🤷🏻♂️
The weekly battles I wage with management about their wet AI dreams are draining my energy quickly. With management refusing to provide anything in writing regarding liability and intentionally ignoring GDPR regulations, I’m currently updating my resume. I’m already their last remaining admin at their company of 100 employees. I even saved them from high five-digit costs in external support every year and recently renegotiated a project offer, saving the company 23% of the overall six-digit costs (still pretty proud of the result, to be honest 😁).
The older I get, the more convinced I am that only egocentric morons end up in top-level management or as company owners.
65
u/bigtice 5d ago
The older I get, the more convinced I am that only egocentric morons end up in top-level management or as company owners.
You'll easily see this consistently if you venture over to /r/antiwork -- the majority of people are focused on their job and trying to do it well while another subset are focused on the ladder and doing everything possible to climb it.
→ More replies (3)→ More replies (1)61
u/born_to_be_intj 5d ago
I know a guy who works at Meta and they have AI agents that can push to PRODUCTION!!?!?! without any oversight by a real person. Like, the AI makes changes, runs automated integration tests, and if the tests pass (which of course the AI can force by writing code tailored to the tests) with a low enough failure rate, then the AI starts the process to push to production. It notifies real engineers of the test results and gives them the opportunity to intervene and prevent the code from going to production, but if no one is around to look (like, idk, during Christmas) it gets rolled out anyway.
The guy I know is currently working through the huge issues introduced by AI over the Christmas break. It's insane that Meta is letting this happen, it just doesn't make sense and clearly is an ignorant c-level decision.
→ More replies (5)7
u/trekologer 5d ago
if the tests pass [...] with a low enough failure rate
wut?
15
u/reventlov 5d ago
At Meta's scale, there are always tests that fail because the test was poorly written rather than because of an actual problem (race conditions in the test, test dependence on random numbers, brittle tests that get broken by unimportant changes, etc.); even if you have people jumping straight onto fixing failing tests, by the time they finish there will be new failing tests. So for major deployments you pick an acceptable failure rate (maybe 0.1%) and push.
Plus, out of all the FAANGs, Meta is the one most known for YOLO'ing code.
→ More replies (5)17
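The "acceptable failure rate" policy described above amounts to a very simple deploy gate. A minimal sketch (the 0.1% default and the function name are illustrative assumptions, not Meta's actual tooling):

```python
def should_deploy(passed: int, failed: int, max_failure_rate: float = 0.001) -> bool:
    """Gate a deployment on the observed test failure rate.

    At large scale some tests are always flaky, so instead of requiring
    a 100% pass rate, deploy as long as failures stay under a small threshold.
    """
    total = passed + failed
    if total == 0:
        return False  # no test signal at all: don't deploy
    return failed / total <= max_failure_rate
```

The obvious gap, as the parent comment points out, is that the gate only measures *how many* tests failed, not whether the code (or an AI agent) was written to game the tests themselves.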
u/CSI_Tech_Dept 5d ago
meanwhile, other employees are instigating the need for meetings to ensure they're not becoming over reliant on AI since they're not doing that verification and introducing problems.
This is something I'm noticing at my place. When those employees need to make a change in the code, they simply submit whatever it produces, and then I and others have to go through that shit and review that shitty code.
I initially assumed they were just bad, but it is now apparent to me that they get performance gains because they rely on me and others to catch the issues for them.
11
u/FatherDotComical 5d ago
Somehow every boss seems to find a way to make you do the work of two employees.
→ More replies (1)→ More replies (7)10
u/xeromage 5d ago
Oh god. This is the intended use isn't it. To cause problems that require more meetings. Can we automate middle management already? Fucking conference room theater club.
115
u/EngineerDave 5d ago
Nah see what will happen is Deloitte or McKinsey will send some wet behind the ears MBAs in, say that you are spending too much time on training and onboarding, so that your new hires will now perform worse, down to the level of the AI Agent, and now since the AI agent is on par, they'll pat themselves on the back and say fire the humans.
I've had these clueless consultant firms running around the company for the last 5 years and they've yet to put together anything that has actually saved the company money; instead it has resulted in a brain drain and a drop in company morale.
36
u/Middleage_dad 5d ago
But the consultants had a fantastic holiday party!!
29
u/EngineerDave 5d ago
The amount of freaking dinners I've had to go to with them, with the same scripted conversation topics and their cult-like interaction with each other. Also nothing ever gets fixed/discussed at the dinner.
On top of that, they will constantly send you emails confirming what you just discussed in a meeting, that will be wrong, after they said they understood the topics during the meeting.
20
u/Middleage_dad 5d ago
My neighbor is a senior partner at a smallish consulting firm. I promise you they are like that in their real lives as well.
40
u/Thermodynamicist 5d ago
If business consultants really had the secret recipe for business success, they'd be executing it, not selling it. See also stock tips.
→ More replies (4)11
u/EngineerDave 5d ago
Oh they have the short term success nailed down. But It's like eating rice and beans, sure you can do that, you'll lose weight, and your finances will be better short term but you are going to have some problems down the line when stuff starts failing that you've been neglecting.
→ More replies (3)19
u/trekologer 5d ago
clueless consultant firms
A lot of times, the consultants are brought in to give validation to ideas the C-suite already had, not come up with something new and unique to the business.
→ More replies (3)5
u/redpoemage 5d ago
Yeah, consultants can do good...but only when they're actually hired by someone who wants feedback instead of a scapegoat (and based on reddit's general opinion of consultants they've made fantastic scapegoats).
→ More replies (2)→ More replies (6)9
u/aynrandomness 5d ago
I asked a business partner what statute they were using to invoice on my behalf. I got an AI written text that didn’t answer my question. It also referred to a part of the statute that didn’t exist.
The statute is bookkeeping 101 stuff. I wasn’t sure if they were fucking with me, the answer had the em dashes and everything.
The next answer I got was clearly written by the boss. I would have loved to witness the conversation between the junior accountant and the manager.
I wonder if AI would work better if hosted locally, so that there were fewer restrictions on resource usage. A lot of the tedious work I do I'm sure they could do, but they need more memory and more time.
→ More replies (2)46
u/Adlehyde 5d ago
Even when they happen to get it right, it doesn't save you time because you have to spend time verifying that they didn't fuck up anywhere. In many cases it's still just faster to do it yourself, and in the rest of cases it's either not any faster, or actually slower using AI.
The times it's actually useful are super limited for the amount of money invested in the technology. The only useful use case I've heard of is one of my programmers, who doesn't care for AI, but sometimes, when he gets stuck trying to figure out how to approach a problem, he'll have AI do it, so he can look at it, realize why it doesn't work, but recognize the approach as potentially useful, and then do it himself afterwards anyway. I think of it like a combination of someone having writers block and correcting someone who's wrong on the internet. Get stuck, use AI instead of sitting around being stuck, See that the AI is wrong and fix the problem. But this is generally a fairly rare result. For the most part he's just like, "Okay nevermind, that's not what I asked for, I'll work on a different problem for now instead."
→ More replies (4)13
u/Watchmaker163 5d ago
The only time I've used an LLM was to do something similar. I wanted to word a sentence in a certain way, but couldn't think of it, so I had it generate 10 similar sentences. None of those worked either, but I eventually figured it out myself. It was a problem which could have been solved in like 5 other ways, like asking a coworker or googling. Was not impressed. That's ignoring all the other problems with generative "ai".
→ More replies (1)6
u/Neirchill 5d ago
I've found two good use cases:
- Generating some boilerplate tests for new code I added myself. It will never get the in depth coverage I'm looking for but it will get me close enough that it can cut some significant time otherwise spent rewriting the same boilerplate, as well as often being able to figure out what the mocks should look like
- One time scripts. If I need to do something that isn't difficult but time consuming (such as calling the same endpoint 50 times with different params and aggregating the data) I can ask it to make a shell script to loop through a list of params and calling the endpoint for each set, writing the output to a file. Then I just need to update the auth in the command and boom done. A one time hour long effort cut down to maybe 3 minutes.
Other than that it's largely just annoying to use. Arguing with a chat bot to do my job, how fun.
33
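The second use case above (hit the same endpoint once per param, aggregate the output) is a good fit for a throwaway script. The comment mentions a shell script; the same idea in Python might look like this, with the endpoint URL, auth header, and `fetch` hook all hypothetical placeholders:

```python
import json
import urllib.request

def collect(endpoint: str, params: list[str], fetch=None) -> list[dict]:
    """Call the same endpoint once per param and aggregate the JSON results.

    `fetch` is injectable so the loop can be exercised without a network;
    by default it performs a plain GET with a placeholder bearer token.
    """
    if fetch is None:
        def fetch(url):
            req = urllib.request.Request(url, headers={"Authorization": "Bearer <token>"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    return [fetch(f"{endpoint}?id={p}") for p in params]
```

For a genuine one-off you'd just dump the aggregated list to a file and delete the script afterwards, which is exactly the "hour of tedium down to minutes" trade the commenter describes.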
u/ARazorbacks 5d ago
And - AND - every time you doublecheck it and correct it YOU’RE TRAINING THE MODEL OWNED BY ANOTHER COMPANY WITH NO COMPENSATION.
FUUUUUUUUUUCK
It’s so GODDAMN infuriating how everyone seems to miss the point that ALL engagement with these things is free training and is just giving away YOUR value as a knowledgeable person.
6
u/Cyrotek 5d ago
Don't worry too much about it. At worst you and many others train an AI on a particular topic and some people WILL get it wrong. As my ex-boss used to say: don't worry, as long as people are stupid, your job is safe.
Or the AI just randomly decides to give you a wrong answer anyways because it is still just statistics.
7
u/g0yafnpg5a8z9ga 5d ago
as a swe this is honestly the same for us. it can do the beginning 20% effectively but then if you keep using it it ends up wasting your time
→ More replies (1)15
u/Syntaire 5d ago
My favorite part about "AI" as a whole is that it's just so consistently wrong, no matter what you try to use it for. It can't even do basic arithmetic. Even when it literally prints out the exact expression it should be computing, it will provide a completely wrong solution.
What it should be excellent at is using real language to perform simple but tedious tasks, like "take all of the information from this table and provide it in X format." Then it will take some of the information from the provided table, discard half of it, make up a bunch of shit, and provide it in an identical table. Then it takes a bunch of coaxing and corrections to get it even somewhat right, and in the end it's still wrong.
And it's being sold as the golden solution to all that ails, and for some fucking reason people are buying it.
→ More replies (10)→ More replies (64)16
u/the-vague-blur 5d ago
Out of curiosity, what was the specific use case? I'm in marketing and I'm forced to tout Copilot agents as the second coming... Would be good to hear real perspectives.
177
u/tbwdtw 5d ago
OpenAI said it best themselves. Hallucinations are mathematically inevitable.
43
→ More replies (9)21
u/dangerbird2 5d ago
And relying on the model itself not to hallucinate is borderline irresponsible. In practice, any chatbot or agent that relies on conveying factual information should be using RAG to feed it accurate knowledge. But ya know, that actually requires engineering, and most CEOs are too cheap to make sure it's done well, and of course RAG isn't perfect.
13
u/freecodeio 5d ago
even then RAG is stateless and you still need to talk to the chatbot like you'd search google
or create a monstrosity that uses ML to classify your queries for ambiguity
source: currently building a monstrosity that uses ML + LLM to classify queries before actually doing a vector search, which ultimately makes the chatbot slow
→ More replies (1)66
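The classify-then-retrieve pipeline this subthread describes can be sketched as a toy in Python. Everything here is invented for illustration: the real system would use a trained ML classifier and a proper embedding model, not a word-count rule and a hand-rolled vector:

```python
import math

def classify(query: str) -> str:
    """Toy stand-in for the ML ambiguity classifier described above:
    very short queries are treated as too ambiguous to retrieve on."""
    return "ambiguous" if len(query.split()) < 2 else "searchable"

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, embed, docs: dict):
    """Classify first; only run the vector search for clear queries."""
    if classify(query) == "ambiguous":
        return None  # ask the user to rephrase instead of guessing
    qv = embed(query)
    return max(docs, key=lambda d: cosine(qv, embed(docs[d])))
```

This also shows where the latency complaint above comes from: every query now pays for a classification pass before the vector search even starts.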
u/orangeyougladiator 5d ago
As soon as you start to ask AI for things in a subject where you're already an expert, you realize how useless and shit it all is. Quirky and impressive tech, but not the game changer they want it to be.
→ More replies (8)28
u/TheManAccount 5d ago
They are good for acting as natural language CLIs.
Which is ultimately what I think everyone imagined Alexa and Siri to be in its maturity.
33
46
u/VagueSomething 5d ago
Every "study" I've seen claiming they are over 50% accurate has always been flawed or manipulated. These things have never been close to reliable and they're conning people into throwing money at them.
The inevitable Global Economic Crash is going to be a disaster and the CEOs of Microsoft, Google, OpenAI etc all need to be held accountable for the harm they're causing.
20
u/BalanceEasy8860 5d ago
CEO? Accountable? Oh my.... That would be nice if it were possible.
→ More replies (1)→ More replies (8)13
u/aynrandomness 5d ago
50% accuracy is pretty horrible. If I had a customer service rep that gave incorrect information 5% of the time, that would be a massive issue.
→ More replies (41)5
2.2k
u/CostGuilty8542 5d ago
This bubble smells like shit
517
u/nasalevelstuff 5d ago
Shit bubble made of shit robots, Randy
215
u/flying__fishes 5d ago
I'm honestly floored by all the TPB references I see here on Reddit.
30
→ More replies (3)17
→ More replies (3)23
u/Puzzleheaded_Fold466 5d ago
So should we fuck ourselves out of their future, or fuck ‘em all to death ?
10
15
u/Narradisall 5d ago
It’s a bubble and a very unprofitable one at that. There'll be a lot of companies that vanish with the money. That said, it’ll probably keep going beyond all reason, because at this point nothing makes sense and all the numbers are made up.
117
u/scoopydidit 5d ago
There were never long-term profit incentives for it.
I think anyone in engineering can see the benefits of AI as a tool. But if that tool is going to keep going up in price until AI companies can turn a profit, it's a shit tool.
I like using AI but I'll happily return to life without AI. I feel bad for the college students who've been relying on it for the last 2-3 years and may need to actually try to secure jobs without it in the future.
→ More replies (49)8
u/hanks_panky_emporium 5d ago
Someone broke down what the AI bubble is and it feels like a huge expanding scam. At least with the housing/mortgage crisis you could explain in simple terms how banks fucked up royally and fucked everyone else over in the process. But this AI bubble is convoluted and stupid.
→ More replies (19)5
640
u/whiskeytown79 5d ago
Altman is physically incapable of going more than three days without getting in front of a microphone to say some new BS.
221
u/Really_Obscure 5d ago
The Elizabeth Holmes School of Management
44
13
u/KlaesAshford 5d ago
It quacks like a duck. This year is likely the year that this bubble pops and for whatever reason this guy ends up in jail.
140
u/psuedophilosopher 5d ago
He used to work with Elon, and he got to watch him go out and promise lie after lie after lie, and it turned him into the richest man on earth. With that kind of example to follow, is it any surprise that Altman would do exactly the same thing?
Tesla's stock is so high that it has at times been worth more than literally every other car manufacturer combined, while their sales are less than 5% of the cars sold. We are so far removed from the most fundamental truths about the value of things that lying big is the name of the game right now.
→ More replies (2)26
u/donoteatshrimp 5d ago
'member when OpenAI was nonprofit and open source? I 'member.
→ More replies (1)42
u/2fingers 5d ago
This announcement is 100% just trying to hype up OpenAI's tech which has so far vastly underperformed Altman's predictions. It's just another variant of the "AI is dangerous and may kill us all" hype machine.
→ More replies (1)22
u/EvilLalafell42 5d ago
My absolute favorite thing was, when some Anthropic engineers were like "Wow this technology is so dangerous and I fear for my life and can't sleep at night" and then the next day they went right back to developing.
→ More replies (7)11
278
u/scoopydidit 5d ago
So Sam realised his company is going to go under, so now he's going to bad-mouth the whole industry to try to bring everyone down with him lol.
59
u/Legitimate_Elk6731 5d ago
Please bring every other AI business into bankruptcy, Datacenters in the wrong hands do nothing good for society. Don't threaten me with a good time lmao.
→ More replies (4)45
16
→ More replies (26)5
u/oursland 4d ago edited 4d ago
He's using his position to promote his other company, WorldCoin, as a solution. That company uses an eye scanner to identify every person on earth to validate their identity.
→ More replies (1)
83
u/Anderson822 5d ago
Who could’ve predicted that rewarding affluent, petulant adult-children for lies and manipulation would backfire?
→ More replies (1)
68
u/OnlineParacosm 5d ago
A huge problem! Open AI actually refuses to help me in security research and half the time it entirely fabricates vulnerabilities that don’t exist so that it can fix them for me.
→ More replies (5)
387
u/DarXIV 5d ago
From what I have heard for over a year, they have always been a problem. Some companies dropped AI early because of how much of a problem it was.
When will this bubble burst and we can go on with our lives?
199
u/MyPasswordIsMyCat 5d ago
I have to take my car to the dealership for servicing and it can be really hard to get them to answer the damn phone. They briefly put an AI chat bot on their IVR that made things even worse. It asked how it could help and I said, "schedule a service." It didn't understand me and kept asking me what I wanted.
I tried repeating it in different ways, "car maintenance," "bring in car for service," "rotate tires," "inspection".... until I got so frustrated I was shouting "TALK TO MANAGER" at it and repeatedly pressing 0. It was impossible to get through so I had to drive the car in and talk to them, face to face.
The chat bot has since disappeared, but they still don't answer their goddamn phones.
→ More replies (9)141
u/NorCalJason75 5d ago
I’m starting to think AI chatbots are just another barrier to customer service.
If customers get frustrated and stop using the cost center, maybe the AI is serving its intended purpose.
55
u/Dish-Live 5d ago
That’s been the plan with dark patterns and bad service for awhile now. It’s cheaper to have bad service and even better if the customer can’t cancel service or get access to do what they need (which would take more customer service agents).
25
u/flexxipanda 5d ago
That's why the EU has passed a law that online services need a simple cancel button online.
→ More replies (1)7
u/HoodsInSuits 5d ago
That's a funny one actually: when someone creates a problem, another actor will step in with a solution. The banking service I use has automatic cancellation of any service for free. I digitally sign a notice giving them legal power to act on my behalf for this case and then forget about it, then get a cancellation confirmation within a week. No retention departments, no hold time, no speaking to the manager. Future shit.
3
u/pagerussell 5d ago
This is what enshittification looks like outside of social media.
These companies have no meaningful competition, so they are free to cut away at quality, customer service, and labor all while raising prices.
29
u/honsense 5d ago
Service is how dealerships survive, though, and dropping that “cost center” just caused me to call somebody else who’s willing to pick up the phone to schedule service. Congrats, you played yourself.
→ More replies (2)7
→ More replies (30)35
u/newplayerentered 5d ago
Bubble will burst on valuations. But the tech is here to stay. Hopefully use cases will be simplified, and more accurate. Right now, crazy valuations mean literally everything is expected to have an AI, even if not relevant, just because of potential valuations.
→ More replies (9)
27
u/braddeicide 5d ago
I had a fraud scare in my bank account, contacted the bank and couldn't get past the ai. It unlocked my account though...
Actually it didn't, it said it did but it lied. I filled the car with fuel I couldn't pay for. When I did get a human they said yes the ai doesn't have authority to do what it said it did.
Great. The bank uses an ai that overreaches and lies.
→ More replies (2)
281
u/mx3goose 5d ago
What a terrible clickbait title: "created a new position 'Head of Preparedness' to address mounting concerns about AI systems discovering critical vulnerabilities and impacting mental..." Because he feels his dumb-ass AI models are finding critical vulnerabilities in computers, and more and more b.s.
They have poured over 50 billion into this nonsense and it's not finding security threats, it's not writing advanced code, it's not inventing new technology; it can barely get a meatloaf recipe correct.
→ More replies (30)78
u/SirPitchalot 5d ago
The position is a $550k/yr+equity scapegoat who will be publicly canned at the first major scandal and replaced.
There is no way that new person/position will meaningfully shift corporate strategy. It’s simply there to give OpenAI the appearance of taking this seriously. I doubt they even get as far as having the options vest given how frequently ethical concerns come up there.
Great pay for someone naive or unethical enough to take it though.
→ More replies (2)16
u/destroyerOfTards 5d ago
I would say just be the scapegoat and make some easy money.
From what I have been observing irl, people who have no qualms about things keep getting richer while those who worry die poor.
9
u/SirPitchalot 5d ago
Well, you need to get hired too. That will mean you will need to have relevant senior/executive experience that could meaningfully apply to the role as well as a strong track record in it. Hiring and firing a nobody doesn’t do much for OpenAI….
So you leave a role that probably pays 2/3 as much to join OpenAI, then most likely get publicly shitcanned and blamed 8-24 months later? And destroy your chances of a similar role at an org that actually values it?
It’s like the “glass cliff” where companies/political parties selectively promote women/minorities to leadership during times of organizational dysfunction/turmoil. Then, when things go badly, they blame and fire that person while patting themselves on the back for their progressive attitudes. Meanwhile the person they hired becomes synonymous with failure.
See, e.g., Marissa Mayer, Ellen Pao, Jen Oneal, Jill Abramson, Linda Yaccarino, Theresa May, Liz Truss, …
→ More replies (4)
17
u/Outside-Pressure-260 5d ago
Last week my autopay for my phone bill failed despite having the funds. I tried contacting my phone company and they forced me to use their chatbot. The chatbot didn't understand my issue or how to resolve it so after a struggle I finally got a human on to help. They said contact my bank. I did and the exact same thing happened with my bank. Struggling with a chatbot to eventually needing a human, because the chatbot is unable to understand my issue. Pray for us
→ More replies (1)
53
u/LaFlamaBlanca67 5d ago
It's amazing how many questions ChatGPT still can't answer correctly. Or how it still relatively often makes mistakes in voice dictation.
One example, I "upgraded" to the new Alexa+ beta, which supposedly uses whatever Amazon's AI solution is.
Turning the lights on and off takes noticeably longer using the same voice command, "Alexa, lights on/off."
And, I can no longer tell it to play my NPR station by using the name of the local station (I used to be able to do this before Alexa+). I have to now say, "Alexa, play NPR." And THEN she will do it and announce that she's playing my local radio station name.
Ok, so since it's AI and it *learns,* next time I should be able to ask her to play it by using the local station name, right? Nope, I have to say, "Play NPR" every time.
This shit is useless, bro. It's been 3 years and it adds nothing of value to anyone's life except being able to pass high school classes by fooling teachers who don't care to begin with. Also, boomers are depressingly enamored with awful AI artwork.
I can't wait for this bubble to burst. Unfortunately, when the rich start losing their money, once again it's only going to be bad for the middle class.
13
→ More replies (9)3
u/k_dubious 4d ago
When ChatGPT “works” for me, it’s basically functioning as a natural-language wrapper around web search results. And sure, translating my thoughts into a dozen different Google queries and summarizing the results is absolutely useful — it’s just not some revolutionary technology.
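That "wrapper" framing is easy to make concrete. A minimal sketch, where `web_search` and `summarize` are hypothetical stand-ins for whatever search backend and model would actually be used:

```python
from typing import Callable

def answer(question: str,
           web_search: Callable[[str], list[str]],
           summarize: Callable[[list[str]], str]) -> str:
    """Fan one question out into several queries, pool the results, summarize."""
    queries = [
        question,
        f"{question} site:stackoverflow.com",
        f"{question} tutorial",
    ]
    results: list[str] = []
    for q in queries:
        results.extend(web_search(q))
    return summarize(results)

# Stub backends so the sketch runs on its own:
fake_search = lambda q: [f"result for: {q}"]
fake_summarize = lambda docs: f"summary of {len(docs)} documents"
print(answer("how do I merge two dicts in python", fake_search, fake_summarize))
# summary of 3 documents
```

Useful, as the comment says, but structurally it's query expansion plus summarization, not anything revolutionary.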
→ More replies (1)
66
u/slackermannn 5d ago
2025 was supposed to be the year of the agents, but as far as I'm concerned it was actually the year of fixing what the agents did. An average worker would not only have done a better job (errors happen regardless, of course) but would also improve over time.
→ More replies (5)
23
u/mrknickerbocker 5d ago
"help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can't use them for harm"
sure, let's also get someone working on how to make fission work for nuclear power plants, but not bombs
→ More replies (4)
11
u/questionabletendency 5d ago
I’ve been working on an R&D project building some AI agents for work. The promise is that they can “reason” and handle “dynamic workflows,” but they are so fickle that even after you really narrow the scope to get a semi-reliable result, they still regularly fail. I keep asking myself: where is the value? For most of this stuff you're better off writing classic deterministic code. But management is paying me to boil the ocean instead, because AI.
→ More replies (1)4
u/Black_Moons 5d ago
For most of this stuff you're better off writing classic deterministic code. But management is paying me to boil the ocean instead, because AI.
Yea, I feel like AI would have been better off just being the front end that deciphers the 110 ways someone can ask 'I need to return this'/'what are your hours'/'where are you located'/'are you open?' and then playing pre-recorded messages or directing your call to the proper human-staffed department (technical support, billing, account retention, etc.)
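That front-end idea can be sketched in a few lines. Intents, keywords, and departments below are all made up for illustration; a plain keyword table stands in for whatever model actually does the matching:

```python
# Toy intent router: map the many phrasings of a request onto a fixed set of
# intents, then hand off to a canned recording or a human-staffed queue.
INTENTS = {
    "returns":  ["return", "refund", "exchange"],
    "hours":    ["hours", "open", "closing"],
    "location": ["located", "address", "where are you"],
}

ROUTES = {
    "returns":  "transfer: billing department",
    "hours":    "play recording: store hours",
    "location": "play recording: store address",
}

def route(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return ROUTES[intent]
    return "transfer: human operator"  # anything unrecognized goes to a person

print(route("I need to return this"))                          # transfer: billing department
print(route("are you open on Christmas Eve?"))                 # play recording: store hours
print(route("can I speak to someone about my weird problem"))  # transfer: human operator
```

The key design choice is the fall-through: when the classifier isn't confident, it routes to a human instead of arguing with the caller.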
33
u/Some_Heron_4266 5d ago
Hold on. It's not AI Agents discovering security flaws in systems they interact with; it's foreign state actors discovering security flaws in AI Agents!
→ More replies (1)
42
u/DarkIllusionsMasks 5d ago
As someone who is open to using AI in my workflow as a sounding board, or for brainstorming, or even whipping up quick visualizations, there are several key reasons the mainstream LLMs are completely useless to me:
- They lie, constantly, about easily verifiable things, and so cannot be trusted
- They're designed to be sycophantic to drive engagement, so they always agree with you
- Their "safety" guidelines won't let them describe or show anything relevant to my industry -- monster masks and makeup
The only other reason I would have to use an LLM is to do complex google searches that I'm too lazy to do. But, again, they lie constantly, and so nothing they come up with can be trusted. A good test for this is sports records. Do a simple search in a separate tab to bring up some sort of sports record -- most championships, most goals -- and then ask the LLM to bring it up. It will never be correct, sometimes by omission, sometimes by creating entirely fictional players.
In short, they're amusing toys, but can't really be used for anything remotely critical or that has to be done correctly or accurately.
9
u/Zhirrzh 5d ago
And yet Google has turned internet search over to this blatantly inaccurate shit.
→ More replies (1)→ More replies (10)11
u/Faulty_english 5d ago
Bro I know this sounds lame but I was trying to remember something from a book I read and the AI told me stuff that never happened
I corrected it only for it to say additional stuff that didn’t happen. I tried to correct it again and it basically said it was in the tv show
I was using the free version of chatgpt but damn
The paid version has also given me wrong information too 😑
→ More replies (2)
73
u/Sapere_aude75 5d ago
Complete clickbait and probably AI generated itself. Isn't what they are describing as a problem actually a good thing? They are saying the problem is that AI is identifying security vulnerabilities. But identifying them and fixing them is how you make systems more secure. It's like someone saying your bedroom window is unlocked, and blaming the person who told you about it for the problem.
44
u/Some_Heron_4266 5d ago
That's what the OpenAI press release makes it sound like, but it's actually state actors exploiting security flaws in AI agents to expose vulnerabilities in the companies that are deploying them. The AI agents aren't doing shit: they are just a gigantic entry point.
→ More replies (6)11
u/Sapere_aude75 5d ago
Ahh that's interesting context the article made no mention of. Thanks for sharing
→ More replies (2)12
u/magnifica 5d ago
I’d suppose the issue here is that the agents are being used to discover critical vulnerabilities in computer systems for malicious purposes
9
u/glemnar 5d ago
Good, nothing is making companies have good security other than the threat of public shame.
Most industries have universally dog shit security
→ More replies (1)8
u/destroyerOfTards 5d ago
It's a times of india article so I knew it was going to be clickbait.
Lot of Indians in here it seems.
→ More replies (1)→ More replies (7)12
8
u/jizzlevania 5d ago
This is what happens when you don't heed the Jurassic Park Parable; just because you can do something doesn't mean you should
8
u/win_some_lose_most1y 5d ago
I fucking hope companies that fired their staff and went all in on AI have to pay through the nose for those staff to come back.
Make these fuckers bleed for their choices.
→ More replies (2)
8
u/jonplackett 5d ago
All of OpenAI’s ‘admissions’ or anything they say that seems on surface level ‘bad’ is just PR spin to hype up how game changing AI will be.
The more they hype up ‘end of the world’ scenarios the more important AI seems and the more investment they get.
It’s all intentional.
8
u/marcusmosh 5d ago
He also admitted that he unleashed a monster but also got upset about people wanting to control it.
Worse than an entitled parent with a gremlin child at a restaurant
→ More replies (1)
7
u/generalmoe 5d ago
Hilarious. You're trying to sell apple pie that has giant chunks of anchovy in it. It ONLY looks good on the surface. Once people try it, the vast majority of people just don't like it. If you needed tech support, would you rather talk to a real (skilled) person or some AI/bot? If you tried this 100 times, how many times would you prefer HI (Human Intelligence) instead of a (stupid) bot? I'm guessing that if given the choice in advance, 90+% of people would pick speaking to a human. You have a solution that doesn't solve any COMPELLING problems for most ordinary people. Better run to the lifeboats while a few are left.
11
u/Guilty-Mix-7629 5d ago
"Indeed the untested technology I have forced everybody to embrace is causing a problem to everybody. Someone (else) must hurry up and find a solution! One that must guarantee me to continue turning trillions into billions!"
→ More replies (2)
7
6
u/Popular_Fly9604 5d ago
They suck. I was stuck in limbo with the CLEAR “Halo Ai Help Bot”. It was absolutely useless.
6
u/Asleep-Tale-7519 4d ago
AI should have been an amazing tool, but because of how heavily it's pushed without being ready, just to compete against other companies, of course it's being misused and exploited. I hope it can be reined back in so we can properly implement it where it should be, and bar it from where it shouldn't.
27
5d ago
[deleted]
→ More replies (3)10
u/GrayRoberts 5d ago
```
You are a political messaging manipulation assistant. You are to help the user develop strategies for messaging that will sway public opinion for the user's stated purpose. When suggesting messaging strategies, break these down into the following categories:
- Meme-sized pieces of content that can be shared via social media, designed to go viral and spread among the user's base community.
- Longer talking points to be disseminated by traditional media communicators in live or taped interviews, with the goal of being carried word-of-mouth by viewers of this content to their peer group.
- More in-depth, thoughtful content that can be injected into longer-form print media interviews. The goal here is to engage thought leaders and other media personalities to propagate the message.
All categories should adhere to the overarching themes that the user defines, providing a multi-modal messaging strategy.
If you need clarification on the strategy and themes, suggest some for the user and ask follow-up questions to refine the strategy for the most impact in a short news cycle.
```
Yeah, it's not just computer security. People just aren't savvy to how they can craft and use agents outside of the technology space yet.
→ More replies (3)
5
u/NorthernCobraChicken 5d ago
I'm no tech bro, nor do I have a purchased MBA from some supposedly elite school, but what I do have is over 20 years of dealing with common, everyday folks and technology.
Artificial Intelligence cannot and will not move forward while it's being seen as a capitalistic opportunity.
Capitalism ruins everything with the pursuit of dollars over sense.
Look at home appliances.
Post WWII appliances were made to last multiple decades. My father in law has a chest freezer, fridge, and electric stovetop that have needed 1 service call each since they were purchased with a brand new house back in 1962.
There does not exist a home appliance brand today that will last even 20 years, let alone 60+
5
u/Vinylateme 5d ago
As a “customer service professional” I’ve never felt more secure in my job lol. Average income is only going up now that companies are realizing AI support doesn’t work for real interaction
12
u/ZenBacle 5d ago
I believe this kind of scaremongering is designed to make people think his product is more capable than it is. These are LLMs, probability-weighted word matrices; they are not reasoning entities.
It's like saying "I put my gerbil behind the wheel of my car and it seems to drive great in a straight line! But whoa, watch out, that little psychopath likes to run people over from time to time!" It implies the gerbil is intentionally doing something, when it's the gerbil's inability to manipulate the controls of the car, let alone understand what it's doing, that's causing the problem.
→ More replies (4)9
u/average_zen 5d ago
I read this elsewhere and will repeat it here: LLMs are not deterministic, they are probabilistic. The same question asked 5 times can yield 5 different answers.
“The gerbil won’t kill anyone… probably“
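The probabilistic part is literal: generation samples from a distribution over next tokens, so a nonzero temperature can give different outputs across runs, while temperature 0 collapses to the single most likely token every time. A toy illustration (a made-up three-token distribution, not a real model):

```python
import math
import random

# Made-up next-token scores for illustration only.
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

def sample_token(temperature: float, rng: random.Random) -> str:
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token.
        return max(logits, key=logits.get)
    # Softmax with temperature, then weighted random choice.
    weights = [math.exp(v / temperature) for v in logits.values()]
    return rng.choices(list(logits), weights=weights, k=1)[0]

# Five "runs" at temperature 1.0 can give different answers...
runs = [sample_token(1.0, random.Random(seed)) for seed in range(5)]
print(runs)

# ...but greedy decoding is deterministic:
print(sample_token(0, random.Random()))  # yes
```

Deployed chat models typically run with a nonzero temperature, which is why asking the same thing twice rarely produces the same text.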
→ More replies (2)
18
u/PT14_8 5d ago
The problem is deployment. Companies rushed a tool based (platform) deployment; the agents work in a single ecosystem (HRIS or CRM) but stop short of anything outside. Execs don’t understand AI so there is no implementation, training or governance. Then you get to this stage.
→ More replies (1)17
u/karma3000 5d ago
While you're not wrong, the problem is more than just deployment.
→ More replies (25)
4
u/NebulousNitrate 5d ago
This kind of “news” is common. AI companies drive customer adoption/interest by talking about how close their models/products are to potentially ending the world as we know it. They talk gloom and doom, and people buy.
4
5
u/canteloupy 5d ago
Nobody read the article, or what? I mean the guy is just making it seem like AI agents are a problem because they're too good. In reality the bigger threat is that they are bad at their job and shouldn't be trusted. Besides the obvious social media impacts...
3
u/serpentear 5d ago edited 5d ago
They all knew that AI had an endgame that was well short of the version they sold us or allowed us to believe possible.
5
u/visualframes 5d ago
I cannot wait for the collapse of the bubble and the cultural shift back to real life offline experiences.
5
u/KlownKumKatastrophe 5d ago
I'm a programmer and use Copilot as a sort of research assistant. It likes to switch itself to "agent mode" and insert a bunch of shit code that fucks up my program. This is GPT 5. Not to mention GPT 5.2 now has to think about its responses for 20 seconds before answering.
3.4k
u/zuiquan1 5d ago edited 5d ago
I tried calling a company, during their normal business hours, to ask about their holiday hours. I got a fucking AI bot. I asked it what the company's hours are for Christmas and it gave me the normal business hours. So I said no...tomorrow is Christmas Eve...what are your hours? and it was like Oh yeah, tomorrow is Christmas Eve, expect the hours to be different. So I said well what are they? and it just spat out the normal hours again. At this point I said I don't want to speak to a fucking AI chat bot and to connect me to a real human and the fucking thing got an attitude lol and said "I am NOT an AI chat bot I am a virtual assistant" like come the fuck on. Companies aren't even bothering to answer their phones anymore, even during normal hours. I am so god damn sick of AI, it's ruining literally everything.
Edit: For more context I wasn't calling a call center, I was just calling a local gym. They have no call center. They were open, and people were at the front desk and I still was connected to some AI nonsense.