AI versus reality: the rise of deepfakes and the challenge to authenticity

At a recent webinar I attended on the evolution of generative AI and deepfakes, a central theme emerged: the technology is advancing so quickly that even sophisticated AI detection tools are struggling to keep up.

Deepfakes are AI-generated or digitally manipulated media files, typically videos, images or audio, which have been created or modified with artificial intelligence to make them look genuine. The media is often manipulated using a type of AI called a neural network, a system made up of many interconnected processing units that work together in a way loosely modelled on the human brain. These systems process information through multiple layers, refining the result each time so that the fake content appears ever more realistic. Another method, known as an autoencoder, uses AI to compress the media file into a compact digital code and then rebuild it to look as close to the original as possible.
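
For readers who want to see the mechanics, the snippet below is a minimal sketch of that "compress and rebuild" idea, assuming the PyTorch library; the layer sizes and the Autoencoder name are illustrative only and do not come from any particular deepfake tool.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Illustrative 'compress then rebuild' network (sizes are arbitrary)."""
    def __init__(self, image_pixels=64 * 64, code_size=128):
        super().__init__()
        # Encoder: shrinks the image down to a compact digital code.
        self.encoder = nn.Sequential(
            nn.Linear(image_pixels, 512), nn.ReLU(),
            nn.Linear(512, code_size),
        )
        # Decoder: rebuilds an image from that code, as close to the
        # original as its training allows.
        self.decoder = nn.Sequential(
            nn.Linear(code_size, 512), nn.ReLU(),
            nn.Linear(512, image_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
reconstruction = model(torch.rand(1, 64 * 64))  # random stand-in for a real image
```

Face-swap tools take this idea further by training one shared encoder with a separate decoder per person, so a face can be encoded once and then rebuilt as someone else.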

One of the key techniques used to create deepfakes is the Generative Adversarial Network, or GAN, which involves two AI systems working against each other. One system, known as the generator, creates the fake content, while the other, known as the discriminator, tries to determine whether it is authentic. The generator keeps improving until the discriminator can no longer tell that the content is fake. At that point, even many expert detection platforms can find it impossible to tell the content apart from the real thing.
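
The adversarial loop itself is compact enough to sketch. The code below is a toy illustration of the generator-versus-discriminator contest, again assuming PyTorch; the vector sizes, learning rates and step count are placeholders, and real deepfake GANs are vastly larger.

```python
import torch
import torch.nn as nn

# Toy generator (noise -> fake sample) and discriminator (sample -> real/fake score).
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
discriminator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                              nn.Linear(64, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(100):
    real = torch.randn(8, 32)   # stand-in for a batch of genuine media samples
    fake = generator(torch.randn(8, 16))

    # Discriminator's turn: learn to score real samples as 1 and fakes as 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(8, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(8, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator's turn: adjust until its fakes are scored as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(8, 1))
    g_loss.backward()
    g_opt.step()
```

Training stops, in principle, when the discriminator's scores are no better than a coin flip, which is precisely why well-trained fakes can defeat detection tools built on similar discriminators.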

Deepfakes are being used in a wide variety of nefarious ways, including in the workplace, in schools and, particularly, in the financial services sector. They present a serious cyber risk: criminals can deploy a single deepfake across multiple financial institutions, and only one weak link is needed for a successful breach resulting in significant fraud.

In a case that came to light in May 2024, Arup, the British engineering firm behind iconic buildings such as the Sydney Opera House, was the target of a deepfake scam in which a video call featuring digitally cloned senior officers of the company led a Hong Kong employee to transfer $25 million to cyber criminals.

In July 2025, Sam Altman, CEO of OpenAI, warned of a “significant impending fraud crisis” driven by AI's growing ability to mimic individuals' voices and likenesses in video, allowing security protocols to be bypassed and fraud to be committed. He emphasised the need for new verification systems to combat increasingly complex AI-driven attacks.

A US-based group called the Coalition for Content Provenance and Authenticity (C2PA) has created standards that link digital content to its original source, so users can check where it came from. Some AI-generated images now include invisible watermarks to show they were made by a machine. Another tool, called The Orb, scans unique human features to create a digital code, which helps determine whether a person in an image is real or computer generated.
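
The snippet below is a simplified illustration of the provenance idea behind standards such as C2PA: bind media to a manifest recording its source, and verify that binding later. Real C2PA manifests are embedded in the file and signed with public-key certificates; this sketch uses a plain SHA-256 hash and an HMAC as a stand-in, and every name in it is illustrative.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in for a real signing certificate

def create_manifest(media_bytes: bytes, source: str) -> dict:
    """Record where the content came from plus a fingerprint of its bytes."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"source": source, "sha256": digest}, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the media has not been altered."""
    expected = hmac.new(SIGNING_KEY, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest tampered with, or signed by someone else
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(media_bytes).hexdigest()

photo = b"...raw image bytes..."
manifest = create_manifest(photo, source="https://example.com/newsroom")
print(verify_manifest(photo, manifest))              # True: intact and signed
print(verify_manifest(photo + b"edited", manifest))  # False: content changed
```

The design point is that any edit to the media, however small, changes the fingerprint and breaks the verification, so a consumer can tell that the file no longer matches what the original publisher signed.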

There is increasing concern across the globe over how advanced and convincing deepfake technology has become, with attacks occurring more frequently and with ever greater sophistication.

Deepfakes also raise serious concerns in the context of commercial litigation, where AI-generated or manipulated evidence could be used to mislead the court. It was interesting to hear that, with advanced platforms struggling to detect authenticity, it often comes down to human judgment: looking at the context of the evidence, the metadata sitting behind it, and its source. It also requires practitioners to ask questions such as: is this document too perfect to be real, and how critical is it to the matters in dispute? Careful scrutiny and professional instinct remain essential tools.
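
As a minimal sketch of one of the checks described above, the snippet below inspects the metadata sitting behind a piece of image evidence. It assumes the Pillow library; the filename is hypothetical, and interpreting any given tag remains a matter for the examiner.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return the image's EXIF metadata as readable tag-name/value pairs."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Purely AI-generated images often carry no camera metadata at all, while an
# edited photograph may betray editing software or mismatched timestamps.
# An absence of metadata is not proof of fakery either way.
for name, value in dump_exif("evidence.jpg").items():
    print(f"{name}: {value}")
```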

If you have any queries regarding deepfakes or concerns arising from suspected deepfake content, please contact us on 0330 162 8252. Partner Rebecca Young and our Media and Reputation Management team have a wealth of experience advising and supporting clients, including individuals and businesses, in relation to online harms.

The majority of our work is privately paying and we will typically require a payment on account of our fees before commencing work. We do not do legally aided work.
