Let’s be honest: the first time I saw photos of Angela Merkel and Barack Obama having a good time on the beach, I thought they were real. After a few zoom-ins I figured out something was wrong, and although I did manage to catch the fake, it would have likely been harder to spot with video and audio. This was the moment I realised that it was time for me to dive into the world of deepfakes. 

A definition from Reddit.

Deepfake is the most popular term for AI-generated human content. For an image or a video to be classified as a deepfake, it needs to depict a natural person. Surprisingly, the term was only coined in 2017 by a Reddit user who created realistic fake videos using AI. Since then, we have seen an explosion of new platforms and applications that help us swap faces, replace voices, and modify videos. This goes hand in hand with the rising number of deepfakes on the internet. It is estimated that over 500,000 deepfakes will be circulating on social media in 2023, and the number is set to surge in 2024 with the proliferation of ChatGPT and next year’s EU and US elections. It is worth mentioning, though, that the EU might put a brake on these numbers with its signature AI Act, under which deepfakes are classified as posing a “high risk to the health and safety or fundamental rights of natural persons”.

The practicalities.

Many of us have generated or received a deepfake at some point (yes, FaceApp and Instagram face-swap filters count). But the real deal is made on high-performance desktop computers. In the case of videos, an algorithm called an “autoencoder” is often employed. It consists of an encoder, which analyses and compresses data such as facial expressions, postures, or vocal traits into a compact representation, and a decoder, which translates that representation back into an image. Another method involves a more advanced machine learning technique called a Generative Adversarial Network (GAN). Here, a generator produces the candidate deepfake, while a “discriminator”, trained on real footage of the person being imitated, acts as a filter that tries to tell fake from real. The generator’s images undergo repeated scrutiny by the discriminator until the deepfake is indistinguishable from real content. Once this is achieved, the final version is produced.
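To make the encoder/decoder idea concrete, here is a minimal sketch in plain NumPy: a linear autoencoder that compresses 16-dimensional “feature” vectors down to a 4-dimensional code and learns to reconstruct them. The dimensions, random data, and training loop are purely illustrative assumptions for this example; a real deepfake pipeline uses deep convolutional networks on image data, not toy vectors.

```python
import numpy as np

# Illustrative toy autoencoder, not a real deepfake pipeline:
# the encoder compresses each input vector, the decoder reconstructs it.
rng = np.random.default_rng(0)

n_samples, n_features, n_code = 200, 16, 4
X = rng.normal(size=(n_samples, n_features))  # stand-in "face feature" data

# Encoder and decoder are single linear layers for simplicity.
W_enc = rng.normal(scale=0.5, size=(n_features, n_code))
W_dec = rng.normal(scale=0.5, size=(n_code, n_features))

def reconstruction_loss(X, W_enc, W_dec):
    code = X @ W_enc       # encoder: compress to 4 dimensions
    X_hat = code @ W_dec   # decoder: reconstruct 16 dimensions
    return float(np.mean((X - X_hat) ** 2))

lr = 0.05
initial_loss = reconstruction_loss(X, W_enc, W_dec)

# Plain gradient descent on the mean-squared reconstruction error.
for _ in range(1000):
    code = X @ W_enc
    err = code @ W_dec - X                     # reconstruction error
    grad_dec = code.T @ err / n_samples        # gradient w.r.t. decoder
    grad_enc = X.T @ (err @ W_dec.T) / n_samples  # gradient w.r.t. encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_loss = reconstruction_loss(X, W_enc, W_dec)
print(f"reconstruction loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

After training, the reconstruction error drops: the network has learned a compressed representation it can decode back into something close to the original input, which is the same principle a face-swapping autoencoder exploits at far greater scale.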

Beyond scammers and cyberthreats

Deepfakes are not limited to scammers or cyberwarfare: they are increasingly used by companies, NGOs, and even political parties as a means of communication. For instance, this year saw the release of the first official AI-generated political campaign video on Twitter. As deepfake technology advances and the cost of production decreases, it poses a significant security threat. However, regulation and awareness in this area are still lacking, partly due to the difficulty of detecting bogus content. In fact, 43% of global consumers admit to being unable to identify deepfakes, making us all vulnerable to their impact.

How can it affect you?

In a world of deepfakes, protecting your reputation takes on a whole new meaning. Let’s assume a deepfake video of your company’s CEO is circulating on the internet. It goes viral and journalists are lining up with questions. What would you do? In an ideal scenario, your communications team would have the right training and a crisis manual in place. However, this is not always the case. If you are not sure whether you can tick the “I am crisis ready” box, we are here to help. At SEC Newgate EU, we have an experienced, AI-savvy issues and crisis team that can operate across borders and sectors to mitigate issues and rebuild your reputation. Even better, we help your company assess its level of crisis preparedness by organising real-life crisis simulations, helping your team identify emerging issues early and ensuring your crisis protocol is up to par before any crisis hits. AI is not the far-fetched “Terminator” future anymore. It is here to stay, so maybe it is time to consider ticking that infamous crisis-readiness box.