It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce the number of real-life child victims.
Like the comments on this post here.
https://sh.itjust.works/post/6220815
I find this argument crazy. I don't even know where to begin on how many ways this could go wrong.
My views (which are apparently not based in fact) are that AI CSAM is not really that different from “actual” CSAM. It still causes harm when viewed, and it is still rooted in the further victimization of the children involved.
Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created.
Some have compared pedophilia and child sexual assault to a drug addiction, which is dubious at best, and pretty offensive imo.
Using drugs has no inherent victim, and it is not predatory.
I could go on, but I'm not an expert or a social worker of any kind.
Can anyone link me articles talking about this?
Boy, this sure seems like something that wouldn't be that hard to just… do a study on, publish a paper perhaps? Get peer reviewed?
It's always weird for me when people have super strong opinions on topics that could just be resolved by studying them and doing the science.
“In my opinion, I think the square root of 7 oughta be 3”
Well, I mean, that's nice, but you do know there's a way we can find out what the square root of seven is, right? We can just go look and see what the actual answer is and make an informed decision around that. Then you don't need to have an “opinion” on the matter, because it's been put to rest, and now we can start talking about something more concrete and meaningful… like interpreting the results of our science and figuring out what they mean.
I’d much rather discuss the meaning of the outcomes of a study on, say,
AI-generated CSAM's impact on proclivity in child predators
, and hashing out whether it really indicates an increase or decrease, perhaps flaws in the study, and what to do with the info. As opposed to just gesturing and hand-waving about whether it would or wouldn't have an impact. It's pointless to argue about what color the sky oughta be if we can just, you know, open the window and go see what color the sky actually is…
I love your enthusiasm for research, but if only it were that easy. I'm a PhD researcher, and my field is sexual violence. It's really not that easy to just go out and interview child sex offenders about their experiences of perpetration.
[This comment has been deleted by an automated system]
While I agree that studies would help, actually performing those studies has historically been very difficult, because the first step in doing a study on pedophilia is actually finding a large enough number of pedophiles who are willing and able to join the study. That by itself is a tall order.
Then you ask these pedophiles (who are for some reason okay with admitting to the researchers that they are, in fact, pedophiles) to self-report their crimes. And you expect them to be honest? Any statistician will tell you that self-reported data is consistently the least reliable kind, and it's doubly unreliable when you're basically asking them to give you a confession that could send them to federal prison.
Or maybe you try going the court records/police FOIA request route? Figure out which court cases deal with pedos, then figure out if AI images were part of the evidence? But that has issues of its own, because you're specifically excluding all the pedos who haven't offended or been caught; you're only selecting the ones who have been taken to court, so your entire sample pool is biased. You're also missing any pedos who have sealed records or sealed evidence, which is fairly common.
Maybe you go the anonymous route. Let people self-report via a QR code or anonymous mail. But a single 4chan post could ruin your entire sample pool, and there's nothing to stop bad actors from intentionally tainting your study, because there are plenty of people who would jump at the chance to make pedos look even worse than they already do, to try and get AI CSAM banned.
The harsh reality is that studies haven't been done because there simply isn't a reliable way to gather data while controlling for bias. With pedophilia being taboo, pedophiles will be dissuaded from participating, because it means potentially outing yourself as one. And at that point, your best-case scenario is having enough money to ghost your entire life.
deleted by creator
Very good comment all around; I just have a nitpick about this section:
Lastly, there's a very troubling thing I've noticed that the majority isn't willing to talk about: there are so, so many people out there who are attracted to kids. Not prepubescent kids, but very few 14-to-16-year-old girls will not have had men approach them with sexual comments. The United States of America voted against making child marriage illegal. The amount of “I'll just fuck this behaviour out of her” you can find online about Greta Thunberg from even before she was an adult is disturbing; people with full names and profile pictures on Facebook will sexualise and make rape threats against a child because she said something they didn't like. There's a certain amount of paedophilia that just gets overlooked and ignored.
Even worse, those people aren't included in research into paedophilia because of how “tolerated” it is. The ones who get caught and researched are the sickos who abuse tens or hundreds of children, but the people who will marry a child won't be.
This is actually called hebephilia/ephebophilia, which the general public treats very similarly and often subsumes under the term pedophilia. It is considered its own thing, though. To quote Wikipedia:
Hebephilia is the strong, persistent sexual interest by adults in pubescent children who are in early adolescence, typically ages 11–14 and showing Tanner stages 2 to 3 of physical development.[1] It differs from pedophilia (the primary or exclusive sexual interest in prepubescent children), and from ephebophilia (the primary sexual interest in later adolescents, typically ages 15–18).[1][2][3] While individuals with a sexual preference for adults may have some sexual interest in pubescent-aged individuals,[2] researchers and clinical diagnoses have proposed that hebephilia is characterized by a sexual preference for pubescent rather than adult partners.[2][4]
My guess for why it is more tolerated than straight-up pedophilia is that the children in question have reached a more mature body that shows some or most properties of a sexually developed person. So while it's still gross and very likely detrimental to the child if pursued (it depends on the age in question; 16-18 is pretty close to adulthood), there seems to be more of an understanding for it.
[This comment has been deleted by an automated system]
I do think attraction to pubescent kids is more tolerated than paedophilia because of the extra “adultness”, but that doesn't make it any more right.
Being attracted to a pre-puberty or early-puberty child is not only considered wrong because they can’t consent, it’s also considered abnormal because they do not share any features of what a “normal” person would be attracted to, namely developed physical sexual traits. I don’t think there is anything being muddied here.
The physical-attraction part gets muddier the more puberty progresses. There isn't really an age limit for this, as puberty works differently for everyone. The psychological/consent part gets muddier as age progresses, combined with the changes puberty makes to your personality, but it also depends on a ton of other factors, like the kind of upbringing in terms of sex ed. There is a reason the age of consent differs vastly even between US states, and even more so internationally, even if you only include Western Europe.
A 12/14/16-year-old kid is still just that, just a kid, no matter how much they think they've grown up.
So this might be your opinion, but many other people would say otherwise; it's not a hard fact. Especially if you go up to 16, an age at which we allow people to do all sorts of things. In the USA you can drive a car, in Germany you can buy and consume alcohol, and 16-year-olds are sometimes already in an apprenticeship, learning a job. People generally stop being kids and start becoming adults somewhere in that range.
So while bringing this distinction up muddies the water, it only muddies it insofar as it is already muddy, and this needs to be part of the conversation if the conversation is to have any relation to reality.
In the end, the problem is the same: an adult is attracted to someone who can’t possibly consent, and the only way they’ll get what they desire is through abuse.
So in conclusion, I don't fully agree here. It's not the same; one is way worse than the other. That doesn't make it okay to get what you want through abuse from a 16-year-old, or wherever you want to set the age limit. Or from anyone, for that matter, but younger people need to be better protected, because they are typically easier to abuse. Where exactly that age limit lies is somewhat a matter of opinion, as the different laws show.
It's so fucked up that anyone thinks enablement is a genuine means of reduction here…
Go check out how many downvotes I got on that post I linked.
deleted by creator
The way I see it, and I'm pretty sure this will get downvoted, is that pedophiles will always find new material on the net. Just like with actual, normal porn, people will put it out.
With AI-generated content, at least there's no actual child being abused, and it can feed the need for ever more new material without causing harm to any real person.
I find the thought of kiddie porn abhorrent, and I think for every offender who actually assaults kids, there are probably a hundred who just get off to porn; but that porn has to come from somewhere, and I'd rather it come from an AI.
What’s the alternative, hunt down and eradicate every last closeted pedo on the planet? Unrealistic at best.
There is no such thing as generated CSAM.
That’s the entire point of calling it “CSAM.”
If you still want those images treated the same - fine. But treating them the same will never make them the same.
It is fundamentally impossible to abuse children who do not exist.
It cannot “further victimize the children involved,” because there are no children involved.
We are talking about drawings.
Fancier drawings than with pen and paper, yes - but still made-up images. Renderings. Hallucinations. They depict things that did not actually happen. They can depict persons who have never lived. They can involve characters who are not human. Please stop lumping that together with photographic evidence of child rape.
So having depictions of cartoon Jews being killed in gas chambers, or cartoon black people enslaved and depicted as such, is not harmful, because the Jews and black people in those cartoons aren't real, so they weren't harmed, and thus it is entirely okay to do?
You cannot simply ignore the context in which a thing exists. Historically, culturally, or legally, something like this would be very wrong and very harmful due to the messages it sends. There is a good reason plenty of countries don't just ban actual CSAM but depictions of it as well: it normalizes, and makes seem harmless, a thing that is anything but.
Arrest the guy who did Maus, I guess.
Unless you can tell the difference between depicting bad things and endorsing them in real life.
But that is just the thing. Depictions of CSAM very often suggest that the victims are enjoying what is happening. This isn't about a caricature or a satire-type art piece trying to hold up a mirror. OP argues against generated CSAM because it is, or would be, used to satisfy urges. In this particular case, the depiction equates to an endorsement. I doubt you can be like, “Hey, get your rocks off to this, but remember, it's bad! But have it anyway; we're not endorsing this, though.”
If you think all porn says ‘do this in real life,’ I have terrifying news about some stuff involving adults.
Adults can consent. Children cannot. Big difference.
Adults can also be raped, and there’s plenty of porn depicting that.
Yes, but adults can actually consent to being in such scenes, and there are laws in place aimed at preventing actual rape from occurring (whether or not those laws are effective or effectively enforced isn't the question, and there's probably a lot still to be done to ensure the safety of actresses and actors). Actual crimes of that kind should be prosecuted and aren't okay either. A depiction of CSAM cannot depict any legal scenario at all, ever, because children are incapable of consent. Having depictions that help normalize it, or suggest that it is okay, is harmful.
I agree with you; I saw people on Twitter talking about this once. Pretty disgusting to even consider.
I’m just gonna put this out here and hope not to end up on a list:
Let's do a thought experiment and be empathetic toward the human being behind the predator. Ultimately, they are sick, and they feel needs that cannot be met without doing something abhorrent. That is a pretty fucked-up situation to be in. Which is no excuse to become a predator! But understanding why people act the way they do is important to creating solutions.
Most theories about humans agree that sexual needs are pretty important for self-realization. For the pedophile, this presents two choices: become a monster, or never achieve self-realization. We have got to accept that this dilemma is the root of the problem.
Before, there was only one option for a somewhat middle-way solution: video and image material that the consumer could rationalize as not being as bad. Note that that isn't my opinion; I agree with the popular view that it still harms children and needs to be illegal.
Now, for the first time, there is a chance to cut through this dilemma by introducing a third option: generated content. This still uses existing CSAM as a basis, but so does every database that is used to find CSAM for prevention and policing. The actual pictures and videos aren't stored in the AI model and don't need to be stored after the model has been created. With that model, more or less infinite new content can be created, which imo harms the children significantly less directly. This is, imo, different from the actual CSAM material because no one can tell who is or isn't in the base data.
Another benefit of this approach has to do with the reason CSAM exists in the first place. AFAIK, most of this material comes from situations where the child is already being abused. At some point, the abuser recognises that CSAM can get them monetary benefits and/or access to CSAM of other children. This is where I will draw a comparison to addiction, because it's kind of similar: people doing illegal stuff because they have needs they can't fulfill otherwise. If there were a place to get the “clean” stuff, far fewer people would go to the shady corner dealer.
In the end, I think there is a utilitarian argument to be made here. Given the far-removed damage that generating CSAM via AI still deals to the actual victims, we could help people not become predators, help predators not reoffend, and most importantly prevent, or at least lessen, the amount of further real CSAM being created.
Except there is a good bit of evidence showing that consuming porn actively changes how we behave with regard to sex. By creating CSAM with AI, you create a depiction of a child that is a mere object for sexual gratification. That fosters a lack of empathy and an egocentric, self-gratifying viewpoint. I think that can be said of all porn, honestly. The more I learn about what porn does to our brains, the more problematic I find it.
I agree with this.
The more I learn about what porn does to our brains, the more problematic I find it
And I agree with this especially. Turns out a brain that was/is at least in part there to get us to procreate isn’t meant to get this itch scratched 24/7.
But to answer your concern, I will draw another comparison with addiction: giving addictive drugs out like candy isn't wise, just as it wouldn't be wise to give everyone access to generated CSAM. You'd need a control mechanism so that only people who need access get it. Admittedly, this will deter a few people from getting their fix from the controlled instances, compared to completely free access. With drugs, though, this seems to lead to a decrease in the amount of street-sold drugs, so I see no reason it wouldn't be true, at least to some extent, for CSAM.
I'm an advocate of safe injection sites, so I will agree somewhat here. Safe injection sites work because they identify addicts and aggressively supply them with resources to counteract the need for the addiction in the first place, all while encouraging less and less use. This is an approach that could have merit for pedophiles, but some unique issues pop up with it as well: to consume a drug, the drug must enter the body somehow, where it is metabolized.
CSAM, on the other hand, is taken in simply by looking at it. There is no “gloves-on” approach to generating or handling the content without absorbing it; the best that can be hoped for is to have it generated by someone completely ‘immune’ to it, which raises questions about how “sexy” they could make the content. If it doesn't “scratch the itch,” the addicts will simply turn back to the real stuff.
There is a slim argument to be made that you could actually create MORE pedophiles through classical conditioning, by exposing non-pedophilic people to erotic content paired with what looks like children. You could of course have it produced and handled by recovering or in-treatment pedophiles, but that sounds like it defeats the point of limited access entirely and is therefore still bad, at least for the ones in charge of distribution.
Additionally, digital content isn’t destroyed upon consumption like a drug, and you have a more minor but still real problem of content diversion, where content made for the program is spread to those not getting the help that was meant to be paired with it. This is an issue, of course, but could be rationalized as worth it so long as at least some pedophiles were being treated.
Yes, there are a lot of open questions around this, especially about the who and how of generation, and tbh it makes me a bit uncomfortable to think about a system like this in detail, because it would have to include rating these materials on a “sexiness” scale, which feels revolting.
deleted by creator
[This comment has been deleted by an automated system]
You make a very similar argument to @Surdon's, and my answer is the same (in short; my answer to the other comment is longer):
Yes, giving everyone access would be a bad idea. I parallel it to controlled-substance access, which reduces black-market drug sales.
You do have some interesting details though:
Training a model on real CSAM is bad, because it adds the likeness of the original victims to the image model. However, you don’t need CSAM in your training set to generate it.
This has been mentioned a few times, mostly with the idea of mixing “normal” photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.
As far as I can tell, we have no good research in favour of or against allowing automated CSAM. I expect it’ll come out in a couple of years. I also expect the research will show that the net result is a reduction in harm. I then expect politicians to ignore that conclusion and try to ban it regardless because of moral outrage.
This is more or less my expectation too, but I wouldn’t count on the research coming out in a few years. There isn’t much incentive to do actual research on the topic afaik. There isn’t much to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.
[This comment has been deleted by an automated system]
It's not even an idea; it's how you get CSAM out of existing models.
I didn't know this was a thing, tbh. I knew that you could get them to generate adult porn or combine faces with adult porn. Didn't know they could already create realistic CSAM. I assumed they used the original material to train one of the open models. Well, that's even more horrifying.
It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.
Didn't even think about that. Exchanging these models will be significantly less risky than exchanging the actual material. Images are scanned by cloud storage providers, and apparently archives with weak passwords are too. But no one is going to execute an AI model just to see whether or not it can produce CSAM.
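As a side note on why file scanning works where model scanning does not: hosted files can be matched against databases of known-bad hashes, while a model's weights match nothing until the model is actually run. Here is a minimal sketch of the blocklist idea, with a made-up `BLOCKLIST` set standing in for the real hash databases; production scanners (e.g. Microsoft's PhotoDNA) use perceptual hashes that survive resizing and re-encoding, not the plain cryptographic hash shown here.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 digests of known flagged files.
# Real scanners use perceptual hashes so that re-encoded copies still
# match; a cryptographic hash is used here purely for illustration.
BLOCKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_flagged(path: str) -> bool:
    """Return True if the file's digest appears in the blocklist."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in BLOCKLIST

# A model weights file defeats this scheme: the file itself matches no
# blocklist entry, and any image it could produce exists only after
# someone actually samples from the model.
```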
[This comment has been deleted by an automated system]
AI CSAM is not really that different from “actual” CSAM
How do you not see how fucking offensive this is? A drawing is not really different from a REAL-LIFE KID being abused?
It still causes harm when viewed
The same way killing someone in a video game will cause harm?
And it is still rooted in the further victimization of the children involved.
The made up children? What the hell are you talking about?
Some have compared pedophilia and child sexual assault to a drug addiction
No one sane is saying actually abusing kids is like a drug addiction. But you're conflating pedophilia and assault. When people say pedophilia is like a drug addiction, it's non-offending pedophiles who are being discussed. Literally no one thinks assaulting kids is like a drug addiction. That's your own misunderstanding.
Can anyone link me articles talking about this?
About what, exactly? There's zero evidence that drawings or fantasies cause people to assault children.
I don't get it. It seems many people want to condemn all forms of child porn, seemingly to avoid downvotes, because for some reason the internet community can't see that AI-generated images don't harm anyone.
deleted by creator
I’m not sure what you mean by ‘defending’ pedophiles. They have a right to exist, and to feel validated in their attraction (which they do not control), but no right to have sex with children.
deleted by creator
I would certainly condemn the killers. But you’re right, I feel a large segment of the online population wouldn’t.
deleted by creator
I’m so sorry, that’s such a sad story.
How are the drawings made?
Removed by mod
[This comment has been deleted by an automated system]
because it doesn’t happen if there isn’t evidence
._.
Yeah. People are way too hung up on there being evidence of stuff. It just FEELS right to you, right?
How shocking. The furry defending pedophilia and cartoons of people fucking kids.
Lol, ok. No actual response to legitimate points that were not “defending” pedophiles, and I will 100% defend all cartoons. But just “lol, furry pedophile”. Typical.
How is this an unpopular opinion?
People like that are pedo apologists and the fact that they’re not being banned from the major Lemmy instances tells us all we need to know about those worthless shitholes.
deleted by creator
The way I see it is like this: pedos, like everybody else, have sexual urges, and like most people, they will try to do something about them… but they can't, for obvious reasons. Since the stigma against pedophilia is so great, a lot of them are too afraid to come out and get help. Therefore, they tend to feel repressed. They're in a situation where they can't do what they want and they're too afraid to get help, and so they try to bottle things up until they snap.
When that happens, they tend to snap in one of two ways. The first is by seeking out CP online, and the second is by actually molesting a child. Both of these options are terrible, because the former generates demand for more CP content to be created, and thus puts more children in abusive situations, and the latter is just straight-up child rape. I think a lot of them understand the risk of doing either, but they do it anyway, either because they don't care or because they can't help themselves. However, there might be a solution for at least some of them.
If we assume that a certain portion of pedophiles are too afraid to do anything but too repressed not to, then providing them with a legal outlet could push at least some of them away from harming actual kids. That's where I see AI-generated CP coming in. I see it as an outlet, and I don't think it's anything new, either. We've had the lolicon/shotacon shit for a while, which is basically just drawn CP. As disgusting as it is, it hasn't resulted in any major spikes in child sexual assault rates, as far as I am aware. Therefore, if fake CP, including AI-generated CP, doesn't use any actual CP to generate the images, and it can keep pedos away from kids, then I don't see the issue with it. Sure, it is gross, but we have to remind ourselves that we don't ban things based on how gross they are. The reason we ban CP in the first place isn't that it's gross, but that it actually harms kids mentally and physically. If AI-generated CP can keep any pedos from harming kids, then I see that as a win.
To those who say “no actual children are involved”:
What the fuck was the dataset trained on, then? Even regular art generators have had the issue of “lolita porn” (not the drawn kind, but the “very softcore” kind with real kids!) ending up in their training material, and with current technology, it's very difficult to remove it without redoing the whole dataset yet again.
At least with drawings, I can understand the point, as long as no model is used and it is easy to differentiate between real images and drawings (I've heard really bad things about those doing it in a “high art” style). Have I also told you how much of a disaster it would be if the line between real and fake CSAM were muddied? We already have moronic people arguing “what if someone matures faster than the others,” like Yandev. We will have “what if someone gets jailed after thinking their stuff was just AI-generated.”
Even regular art generators have had the issue of “lolita porn” ending up in their training material
Source? I’ve never heard of this happening. I feel like it would be pretty difficult for material that’s not easily found on clearnet (where AI scrapers are sourcing their training material from) to end up in the training dataset without being very intentional.
It was on Twitter, posted by an anti-AI group. Don't have the link anymore.
What the fuck was the dataset trained on, then?
I’m pretty sure if you show an AI regular porn and regular pictures of children, it will be able to deduce what child porn looks like without any actual children being harmed.
Even in that scenario, it would be fucking creepy, since actual kids are still involved.
It being creepy and it doing harm are different things, right?
deleted by creator
Your statement is ‘I don't know what I'm talking about, but I have strong opinions.’ That's understandable, but if we really care about harm reduction, then it has to be evidence-based, science-backed policy.
I have no idea what the right thing to do is, but I want whatever helps mitigate risk and harm.