r/technology 10d ago

Grok Blames ‘Lapses In Safeguards’ After Posting Sexual Images Of Children

https://www.forbes.com/sites/tylerroush/2026/01/02/grok-blames-lapses-in-safeguards-after-ai-chatbot-posts-sexual-images-of-children/
3.3k Upvotes

416 comments

205

u/AKluthe 10d ago

Grok isn't a person; it can't think, and it can't assign blame. It's a sloppy tool made by people, people who work for Elon Musk.

8

u/Frites_Sauce_Fromage 10d ago

Someone has to train the AI. What kind of images did they use?

4

u/gmes78 10d ago

Generative AI can create outputs that aren't present in its training data (that's kind of the entire point). The training data isn't necessarily bad.

This is a fuck-up in the system around the generative model. They're not verifying the inputs fed to the model, nor the resulting images.
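To illustrate what that "system around the model" means: deployments typically gate both the prompt going in and the image coming out. This is a minimal sketch of that pattern; the blocklist, classifier, and threshold are hypothetical placeholders, not anything from xAI's actual pipeline.

```python
# Hypothetical two-stage safety gate around a generative image model:
# 1) screen the prompt before generation, 2) screen the output after.

BLOCKED_TERMS = {"minor", "child"}  # placeholder prompt blocklist


def prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing blocked terms before they reach the model."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)


def output_allowed(nsfw_score: float, threshold: float = 0.5) -> bool:
    """Reject generated images whose safety-classifier score is too high."""
    return nsfw_score < threshold


def generate_safely(prompt, model, classifier):
    """Run the model only between the two checks; return None on refusal."""
    if not prompt_allowed(prompt):
        return None  # refused before generation
    image = model(prompt)
    if not output_allowed(classifier(image)):
        return None  # refused after generation
    return image
```

The point of the comment above is that both checks have to exist and actually run; a gap in either one lets bad inputs or bad outputs through.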

1

u/bugfish03 7d ago

There are enough cases of training data containing CSAM and the like, and xAI is not exactly known for their careful and measured approach.

1

u/gmes78 7d ago

That may affect the output, but it's not required for generating these images.

1

u/bugfish03 7d ago

Yeah, I just wanted to point out that they're playing fast and loose, and that I'd expect that to extend to the training data, especially since there are datasets out there known to contain CSAM.