OpenAI collapses media reality with Sora AI video generator | If trusting video from anonymous sources on social media was a bad idea before, it’s an even worse idea now

Hello, cultural singularity: soon, every video you see online could be completely fake.
Another stepping stone to a much worse world. We won’t know what is real anymore.
I think it’s very cool technology, but in the hands of governments and psyops, it’s going to brainwash entire countries.
Want another 9/11? Sure, no problem. Blow up a building, tell people you have some random video of what happened, captured by civilians… place evidence in locations where it will be found.
We already don’t know what is real. This will only make that clearer.
I think some governments already had tech like this, but not all of them did.
It will be interesting to follow this. One likely consequence is lots of fake videos on YouTube depicting events that never happened, used to stir up aggression.
Maybe, but I doubt it, if only because traditional propaganda has been 100% effective without generative AI.
Photoshop has existed for a long time. Three Letter Agencies have been faking stuff forever. Not new.
Will this make it easier and faster? For sure. The one upside I can see is that it brings the conversation to everyone, even those folks who don’t want to acknowledge that government is as bad an actor as anyone else.
Of all the things, this really scares me. Many people scroll through their socials so quickly that they definitely won’t be able to tell generated clips apart from real ones. And the generated ones will only get better. One generation later, nobody will believe anything they see on a screen. And no, I don’t think regulation can do much here, as it will only end up heavily censoring everything, leading to more distrust in media.
I think it could end up being a good thing if it causes social media to collapse into smaller, better-known social groups.