A Telegram user who advertises their services on Twitter will create an AI-generated pornographic image of anyone in the world for as little as $10 if users send them pictures of that person. Like many other Telegram communities and users producing nonconsensual AI-generated sexual images, this user creates fake nude images of celebrities, including images of minors in swimsuits, but is particularly notable because it plainly and openly shows one of the most severe harms of generative AI tools: easily creating nonconsensual pornography of ordinary people.
That’s a ripoff. It costs them at most $0.10 to do a simple Stable Diffusion img2img run. And most people could do it themselves; they’re purposefully exploiting people who aren’t tech savvy.
I have no sympathy for the people who are being scammed here, I hope they lose hundreds to it. Making fake porn of somebody else without their consent, particularly that which could be mistaken for real if it were to be seen by others, is awful.
I wish everyone involved in this use of AI a very awful day.
Imagine hiring a hit man and then realizing he hired another hit man at half the price. I think the government should compensate them.
Hitman hires hitman who hires hitman who hires hitman who hires hitman who tells police - Oct ‘19
Nested hit man scalpers taking advantage of overpaying client.
it’s a “I don’t know tech” tax
The people being exploited are the ones who are the victims of this, not people who paid for it.
There are many victims, including the perpetrators.
It seems like there’s a news story every month or two about a kid who kills themselves because videos of them are circulating. Or they’re being blackmailed.
I have a really hard time thinking of the people who spend ten bucks making deep fakes of other people as victims.
Your lack of imagination doesn’t make the plight of non-consensual AI-generated porn artists any less tragic.
deleted by creator
Writing /s would have implied that my fellow lemurs don’t get jokes, and I give them more credit than that.
deleted by creator
Some people just don’t have a sense of humor.
And those people are YOU!!
Thanks for the finger-wagging, you moralistic rapist!
My sarcasm detector is between 8.5-9.5 outta ten.
Missed it this time, FWIW!
No one’s a victim, no one’s being exploited. Same as taping a head on a porno mag.
That’s like 80% of the IT industry.
IDK, $10 seems pretty reasonable to run a script for someone who doesn’t want to. A lot of people have that type of arrangement for a job…
That said, I would absolutely never do this for someone, I’m not making nudes of a real person.
The scam is another thing. Fuck the people selling this.
But fuck dude, they aren’t taking advantage of anyone buying the service. That’s not how the fucking world works. It turns out that if you have money, you can pay people to do shit like clean your house or do an oil change.
NOBODY on that side of the equation is being exploited 🤣
In my experience with SD, getting images that aren’t obviously “wrong” in some way takes multiple iterations with quite some time spent tuning prompts and parameters.
Wait, this is a tool built into Stable Diffusion?
As for people doing it themselves, it might be a bit too technical for some people to set up. But I’ve never tried Stable Diffusion.
It’s not like deepfake pornography is “built in,” but Stable Diffusion can take existing images and generate new ones based on them. That’s kinda how it works, really. The de-facto standard UI makes it pretty simple, even for someone who’s not too tech savvy: https://github.com/AUTOMATIC1111/stable-diffusion-webui
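For anyone curious what “take an existing image and generate stuff based on it” looks like in practice, here’s a minimal img2img sketch using the Hugging Face `diffusers` library (the webui linked above wraps the same underlying idea). The model ID, file names, and parameter values are illustrative assumptions, not recommendations, and running this requires a GPU plus a multi-gigabyte model download.

```python
# Minimal img2img sketch with Hugging Face `diffusers`.
# Model ID, file names, and parameter values are illustrative only.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.x checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

# SD 1.x works best at its native 512x512 resolution.
init = Image.open("input.jpg").convert("RGB").resize((512, 512))

# `strength` controls how far the output may drift from the input image:
# near 0.0 it barely changes, near 1.0 it mostly ignores the input.
result = pipe(
    prompt="a watercolor painting of the same scene",
    image=init,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

result.save("output.png")
```

As the thread notes, the first output is rarely what you want; in practice you rerun this loop many times while tweaking `strength`, the prompt, and the seed.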
Img2img isn’t always spot-on with what you want it to do, though. I was making extra pictures for my kid’s bedtime books that we made together and it was really hit or miss. I’ve even goofed around with my own pictures to turn myself into various characters, and it doesn’t work out like you want it to much of the time. I can imagine it’s the same when going for porn, where you’d need numerous iterations and tweaks to get the right look/facsimile. There are tools/SD plugins like Roop that make transferring faces with img2img easier and more reliable, but even then it’s still not perfect. I haven’t messed around with it in several months, so maybe it’s better and easier now.
It depends on the models you use, too. There are specially trained models out there, and all you need to do is give one a prompt like “naked” and it’s scary good at making something realistic in 2 minutes. But yeah, there is a learning curve in setting everything up.
Thanks for the link. I’ve been running some LLMs locally, and I have been interested in Stable Diffusion. I’m not sure I have the specs for it at the moment, though.
An iPhone from 2018 can run Stable Diffusion. You can probably run it on your computer. It just might not be very fast.
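A rough way to sanity-check your specs: the model weights dominate memory use, so you can estimate the footprint from parameter counts. The figures below are approximate public numbers for SD 1.5’s three components, not exact values.

```python
# Back-of-envelope memory estimate for Stable Diffusion 1.x weights.
# Parameter counts are approximate public figures for SD 1.5.
unet_params = 860_000_000          # denoising UNet
vae_params = 83_000_000            # image autoencoder
text_encoder_params = 123_000_000  # CLIP text encoder

total_params = unet_params + vae_params + text_encoder_params

bytes_fp16 = total_params * 2  # half precision: 2 bytes per parameter
bytes_fp32 = total_params * 4  # full precision: 4 bytes per parameter

print(f"fp16 weights: ~{bytes_fp16 / 1e9:.1f} GB")  # ~2.1 GB
print(f"fp32 weights: ~{bytes_fp32 / 1e9:.1f} GB")  # ~4.3 GB
```

So at half precision the weights alone fit in roughly 2 GB, which is why modest GPUs and even phones can load it; generation speed is the part that suffers on weak hardware.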
By the way, if you’re interested in Stable Diffusion and it turns out your computer CAN’T handle it, there are sites that will let you toy around with it for free, like civitai. They host an enormous number of models and many of them work with the site’s built in generation.
Not quite as robust as running it locally, but worth trying out. And much faster than any of the ancient computers I own.
And mechanics exploit people needing brake jobs. What’s your point?