Swift in numbers

Since we are living in the Taylor Swift zeitgeist and Faster Networks has been on a break from normal proceedings, it seems apt to revisit a moment this summer when the internet broke. In late January a new collection of deepfakes flooded the internet, and X recorded 47 million clicks on one sexually explicit Taylor Swift image. To recap: a deepfake is a manipulated piece of visual or audio content (or both) that uses original material to subvert information or fabricate something new. The term was originally coined to describe non-consensual pornographic content in which a recognisable face is attached to someone else's body. It is a disturbing, fraudulent and disgusting practice designed to intimidate, shame, harass and assault victims, who are overwhelmingly women.

AI technology, the software from which deepfakes are born, raises serious privacy and copyright issues; Faster Networks wrote about that here. In that same article we discussed the need to pause the development of AI technology until proper testing had taken place, and especially before unleashing it on (and for) the masses. An open letter published by Future of Life and Encode Justice in March 2023 petitioned big tech to take a six-month sabbatical: to reflect, to let the current technology settle, to make way for adaptations, and to give policy and legislation even a short window to catch up. Faster Networks believe the opposite happened, and Wired agrees: technology players, too scared of being left behind, ramped up their investment, focus and marketing.

Deepfakes are not new, but what is different now from 2017, when they first made their debut, is that anyone has access to the technology and the results are indistinguishable from the real thing. There are many arguments to be made against the sadistic use of AI software as a way to abuse, manipulate, harass, deceive and coerce individuals or communities. The Taylor Swift pornographic deepfakes are an example of how far social media has strayed from its original mission: a safe destination and haven to share information and build connection and community.

The underlying AI technology has been available since 2017, but only in recent years have deepfakes become indistinguishable from the real thing, the unbelievable turning believable. Faster Networks have previously discussed the use and misuse of images and words, and it took literally five minutes before OpenAI's chat generator was creating harmful new content. Harmful in the way that audiovisual material can be weaponised to make people believe untruths and then act on that information: trauma created from trauma. Let's take Taylor Swift out of the picture for a minute and think about the way children's faces and bodies are being used to train AI generators to feed a global beast.

The Minefield podcast discussed the morality of deepfakes: not only their creation, but the sharing of images that denigrate a person and suggest the individual is doing things that don't align with their values. The seed was planted many years ago, when Taylor Swift was turned into a meme and then commandeered by the alt-right via 4chan. This time around, the images left the dark corners of the underbelly chat app Telegram and were allowed to circulate on mainstream commercial channels despite those platforms' "zero tolerance" policies against posting non-consensual pornographic images. It is virtually impossible to have a post removed quickly on any grounds. Remember when Elon Musk sacked almost all of the content moderators in 2022? When these Taylor Swift images were posted on X, it took the Swifties (Taylor Swift's fanbase) almost an entire day to get the poster's account suspended. They did it by banding together, flooding X and the internet with authentic images of Taylor Swift performing, and reporting accounts that were posting the explicit images.

The real harm of deepfakes is manifold. At an individual level there is untold trauma that reverberates for decades. No other public figure in the world has the support Taylor Swift enjoys from her fanbase to help curb the harmful attempts of internet actors willing and able to create sadistic content, yet the experience is isolating and scary no matter the fame and notoriety. Society, however, has a much bigger problem, and that is the continued erosion of trust. When deepfakes are released into the world, we are all being hacked.