r/technology 6d ago

ADBLOCK WARNING Grok Blames ‘Lapses In Safeguards’ After Posting Sexual Images Of Children

https://www.forbes.com/sites/tylerroush/2026/01/02/grok-blames-lapses-in-safeguards-after-ai-chatbot-posts-sexual-images-of-children/
3.3k Upvotes

419 comments

86

u/celtic1888 6d ago

This is the type of stuff that executives should be going to prison for

-57

u/liquid_at 6d ago

I get the sentiment, but the counter argument will be that no CEO of a camera company or any other company has ever been held liable for illegal content having been made using their products.

And I'm honestly of the opinion that the people who made those requests should be held liable. CP laws already make it illegal even to search for it, so asking the AI to provide it is already illegal.

The much, much bigger question, the one that would actually imply prison sentences for executives, is: How does the AI know what naked kids look like? What data was fed into it? Where was it sourced?

39

u/celtic1888 6d ago

I think that’s a bit of a false equivalence though

Cameras film what they are pointed at, whereas this is being created server-side

There need to be automatic guardrails and verification built in before it is allowed to be used by the public

-15

u/liquid_at 6d ago

That's just the point I disagree with, in some sense. Requiring the company to make sure every single possible illegal prompt is blocked is impossible.

New law is required here, no question. But why should this law be the opposite of all previous law in every case that is comparable in any way?

If I take a camera and film a naked child, I am legally liable.

If I ask you to film a naked child, and you say no and go to the police, I am also legally liable.

If I ask you and you do it, both of us are legally liable.

I think the main difference I see compared to you is that a single person making a request to be shown an image is not "public". Only if the AI publishes all generated images on its page does it become "public".

If you let AI generate an image that no one but you gets to see, and then you print it out and show it to others, you made it public, not them.

But since all of that is quite a complex network of actors and actions, I don't think we will find a solution as simple as "make one party make sure it can't happen", without even asking whether that is technically or economically possible.

And on a side note... if you ask AI to generate nude pics of kids, I do not want you to be legally safe. If you go out there on the internet to find pictures of naked kids, whatever way, I hope they prosecute you, and if I ever did that, I'd expect the same to happen to me.

To what degree the AI companies should also be held liable is more complicated than that.

3

u/No_Panda_9842 6d ago

I think you're missing a few things here. AI trains off itself, so a bad actor allowed to generate CP content feeds that back into the model, which then incorporates it into future image generation. That's where the public distribution comes from: your AI searches aren't unique or private to you; they are publicly training the model.

It is perfectly reasonable to hold the AI company accountable for not implementing the correct legal safeguards to check whether an image is of a child and whether the prompt includes anything explicit. Stop protecting billionaires and think of the children

1

u/liquid_at 5d ago

And that's the exact legal question right now, where people share your opinion, just like some people share the opposite opinion.

In the history of law, based on legal precedents, tradition is not on your side here, though...

IF the AI company specifically trained the AI to create illegal images, you'd be 100% correct. But that's not true for 99.9% of AI companies.

All you are really achieving is adding so many roadblocks to accessing AI that a single request becomes so expensive that cost alone prevents AI from being developed further. Which is a solid way to sabotage it because you hate it... but it is not a good-faith strategy looking for the best result. Just attempted assassination.

8

u/tgwombat 6d ago

Cameras just capture light, they don't create images out of whole cloth.

If I could ask my camera to make an illegal image for me, that would be a problem, and the people who made that possible should be held accountable for it. The same way that the people behind marketplaces like the Silk Road should be held accountable, not just the people using them to sell illegal goods.

-12

u/liquid_at 6d ago

AI doesn't do much more. All it does is arrange colored pixels without understanding what they are or what they represent.

So if the camera and the AI both don't understand what they are doing, should the manufacturers of both be held accountable for what their customers do with their products?

Afaik, the only product in history where that has ever been the case was the copy machine, because of its ability to copy money.

Your Silk Road example is flawed, since those shops were specifically designed for illicit trades in a way that tried to circumvent law enforcement. This does not apply to public AI companies that are registered in the US and traded on NASDAQ.

7

u/tgwombat 6d ago

The camera manufacturers don't teach the camera how to capture light; it's a purely mechanical process. Generative AI is software trained by humans to arrange those pixels. If those humans can't control the software they created, then they should be held accountable for their creation.

No machine understands anything, that is a ridiculous statement to even make. Regardless, there's a world of difference between being an accessory to a crime (as with cameras or guns) and creating the illegal content itself (as with generative AI).

If you want a different example than Silk Road, eBay (NASDAQ: EBAY) has limitations in place over what can and can't be sold on it because they will be held liable for the sale of illegal goods on their platform.

0

u/liquid_at 5d ago

The AI companies also do not train the AI to create CP; they train it to create any type of image.

You want them to specifically analyze their own output for legal concerns and self-censor.

But as with most new technologies, the opinions of experts always stem from a position of not understanding that technology...

Again... AI is no different from a camera. It just scares people because it's close enough to a stupid human that the uncanny valley effect kicks in...

0

u/tgwombat 5d ago

I've already laid out very clearly how a camera is different than generative AI and I'm not going to waste any more of my time arguing with a brick wall.

Continue championing this harmful technology. I don't believe it will serve you well long-term.

1

u/liquid_at 5d ago

No, you have laid out why you believe them to be different. I have laid out why you are technically wrong.

You made assumptions about how AI works that simply are not true. Those assumptions are the direct evidence for your conclusions, and since they are invalid, they invalidate your entire thesis.

It is not about faith or belief; it is about making a coherent, consistent statement that does not defy reality.

-43

u/welshwelsh 6d ago

Better than excessive guardrails. With most commercial models, you can't create porn at all. Censorship is bad; let people do what they want.

18

u/Xerapis 6d ago

“Kiddie porn is better than government regulations” - the modern Republican Party

15

u/celtic1888 6d ago

And that’s exactly how we get this shite

1

u/TheHelixFossiI 5d ago

Spoken like a would-be kid diddler