The Taylor Swift deepfake is just the tip of an ugly iceberg
Before they were taken down, deepfake AI images of Taylor Swift racked up tens of millions of views. It’s vile stuff. A single post drew 45 million views before the account behind it was suspended. An investigation traced most of the images to a Telegram group “dedicated to making non-consensual AI generated sexual images of women.”
Social media platforms were reasonably quick to take down the images but, of course, the internet is forever and the trolls were immediately all over this. The pictures are still out there, accessible with a quick Google search. Swift is said to be “furious” about this and is reportedly considering legal action.
This isn’t new. A site called Celeb Jihad has been posting such images of Swift for years, going as far back as 2011. The difference is that today’s AI image-generating technology makes creating this garbage trivial, via face-swapping tools and so-called “undressing apps.” In the case of the Swift images, the Telegram group appears to have used a tool created by Microsoft.
It’s estimated that the dozens of websites hosting this content offer some 200,000 nonconsensual deepfake videos, which were viewed more than 4.2 billion times in 2023. Fully 96% of the images target women. The problem is getting worse, and the videos increasingly contain racism as well.
What can be done? Some jurisdictions have laws against the distribution of deepfakes, giving victims the right to sue. But as for federal and international laws? There’s not much in place.
Back to Taylor for a second. Fans and non-fans alike have come to her rescue. Watch for #ProtectTaylorSwift to trend on X and other platforms.
This, however, comes with its own dangers. Fans could end up doxing the wrong people, subjecting them to harassment they don’t deserve. Meanwhile, the real perpetrators skate free, changing their profiles to stay one step ahead.
The use of AI for evil is a growing problem. It’s only going to get weirder.
