What has happened to Taylor Swift and does the law protect victims of deepfakes?

Media reports suggest that music artist Taylor Swift is considering legal action following the spread of AI-generated “deepfake” sexually explicit images purporting to depict her, which emerged last week. The images became widely available across the internet, and according to the New York Times, one image was viewed 47 million times before the posting account was suspended by X (formerly Twitter) on Thursday.

In a media statement, X said: “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”

In this article, we take a more general look at the emergence of deepfakes and how they may be used.

What is a “deepfake”?

Awareness of “deepfakes” grew in 2017, when pornographic images began circulating online that had been doctored to replace the faces of the original subjects with those of well-known public figures and celebrities. Many of those early deepfakes were fairly obviously fake. Since then, advances in editing software and AI technology have allowed deepfakes to become far more sophisticated: AI is used to give the impression that the person depicted in a video is actually doing, or even saying, what appears in it, and the results are increasingly realistic. More recently, other well-known individuals, including politicians and celebrities, have been the victims of deepfakes.

Deepfakes also exist in audio form. Voice synthesis can be used to replicate someone’s voice, which is then manipulated using “text-to-speech” software.

There are currently no specific civil laws against deepfakes in the UK. Regulation in this area is new and relatively untested. This means that legal options for a victim of a deepfake are still limited.

The Crown Prosecution Service has confirmed that its lawyers will be able to prosecute perpetrators of deepfakes who share images and videos of their victims. What is less clear is what can be done to prevent deepfakes from being made in the first place.

Online Safety

The Online Safety Act makes companies that operate a wide range of online services responsible for keeping people safe online. OFCOM is responsible for implementing the codes of practice that those companies will be required to adhere to. More information is available in a previous JMW blog.

The Online Safety Act makes non-consensual deepfake pornography illegal in the UK. As the Act is relatively new, OFCOM, the UK regulator of broadcasting and telecommunications, is still consulting on how it will take practical effect. The consultation will cover illegal content and pornography, with a focus on protecting women and girls.

OFCOM has previously explained that, in relation to AI, regulation will need to be dynamic and fast-paced, as the questions it must answer as regulator are constantly evolving.

Data protection

In parallel, the UK data protection regulator, the Information Commissioner’s Office (ICO), has launched a consultation on the use of AI technology, which closes on 1 March 2024. The ICO hopes that the consultation will give data controllers and data processors certainty about their obligations to safeguard information rights and freedoms.

An individual’s image falls within the definition of ‘personal data’, as it is information that relates to an identified or identifiable individual.

It is unlikely that someone who wishes to make a deepfake of a celebrity would seek consent first. It has been argued elsewhere that ‘legitimate interest’ might provide a lawful basis for processing personal data, but legitimate interest presupposes that the data would be used in ways people would reasonably expect and with minimal privacy impact. On the face of it, it seems unlikely that there is a legitimate interest in doctoring the image of a celebrity, or anyone else, into a scenario they were not involved in so as to give a false impression of the situation.

We will need to await the conclusions of the ICO’s consultation, and see whether the ICO ultimately takes the same approach as the Online Safety Act by placing responsibility for regulation and monitoring on the tech companies. How any enforcement action or legal claims might be handled remains to be seen.

Talk to us

JMW’s Media and Reputation Management team have a wealth of experience advising and supporting clients (including individuals and businesses) in relation to online harms. You can contact the team by calling 0345 872 6666 or by completing our online enquiry form.
