As the technology behind generative artificial intelligence (AI) continues to advance, so too does the potential for its misuse. One particularly concerning application of this technology is the creation of deepfakes, which are increasingly being used to spread disinformation online.
With deepfakes becoming not only easier and cheaper to produce but also more realistic and harder to detect, the potential for them to be used for malicious purposes is growing rapidly.
In recent months, a number of deepfakes have been created using generative AI. Some are made for fun; others are created for more malicious purposes.
Fake news is already a huge issue online, one that has raised serious concerns about the authenticity of digital media and its impact on public discourse and democracy. With generative AI, this trend will only worsen as new AI tools are developed and made available to anyone.
This should bring concerns about deepfakes to the forefront of public discussion, and it raises serious questions: What is the impact of AI-driven deepfakes and disinformation, and what does the deepening commercialisation of AI and deepfake technology mean? We already know that AI will impact PR significantly, likely as a net benefit, and the list of AI tools for PR keeps growing. But what about the tools that can be used to spread disinformation?
In this article, we will explore how generative AI technology is fuelling the spread of deepfakes, the harm this causes to public discourse, and the potential consequences of this trend for our society.
First, it’s best to clarify what generative AI and deepfakes are.
What is Generative AI?
Generative AI is a subset of artificial intelligence, in which a machine is capable of creating new data or content, such as images, sound files, and even digital art. This kind of AI is referred to as “generative” because it can generate new data that is unique and original, as opposed to simply processing or analyzing existing data.
Generative AI systems are designed to learn from patterns and data sets, enabling them to make predictions and create new content that is similar to what they have learned.
This approach can be compared to the way humans learn and create, as it enables machines to work with creative uncertainty and come up with something new. Examples of applications for generative AI include creating unique artworks, generating realistic images and generating text and articles.
The two most popular generative AI models are:
- Transformer-based models — AI such as GPT that draws on information gathered from the internet to create text-based content, from articles to press releases to whitepapers
- Generative Adversarial Networks (GANs) — AI that creates images and multimedia from image and text inputs
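To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop in plain NumPy. The model sizes, learning rate, and the toy one-dimensional "real data" distribution are illustrative choices, not taken from any production system: a one-parameter generator learns to imitate samples from a target distribution while a logistic-regression discriminator learns to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1.25) -- the distribution the generator must imitate.
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

# Generator: an affine map g(z) = w_g * z + b_g over noise z ~ N(0, 1).
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w_d * x + b_d).
w_d, b_d = 0.0, 0.0

lr, batch, steps = 0.03, 64, 3000
for _ in range(steps):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w_g * z + b_g
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Gradients of the binary cross-entropy loss w.r.t. discriminator params
    grad_w_d = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_b_d = np.mean(-(1 - d_real) + d_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # --- Generator step: push D(fake) toward 1 (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_x = -(1 - d_fake) * w_d        # dL_G / d x_fake
    w_g -= lr * np.mean(grad_x * z)
    b_g -= lr * np.mean(grad_x)

fake = w_g * rng.normal(0.0, 1.0, 10000) + b_g
print(f"generated mean = {fake.mean():.2f} (real mean is 4.0)")
```

Production image GANs replace these one-parameter models with deep convolutional networks, but the two-player loop is the same, which is why GAN output keeps improving for as long as the discriminator can still find flaws to exploit.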
GANs tend to pose the greatest risk when it comes to generating disinformation with deepfakes because they can create highly realistic images that are difficult to identify as AI-generated.
AI tools used to generate deepfakes
Midjourney is a generative AI tool developed for producing high-quality images. Its models use neural networks to create realistic images of objects, people, and even landscapes.
DALL·E from OpenAI is a generative model that can create unique images from textual inputs. It was named after the surrealist artist Salvador Dalí and the movie character WALL-E. DALL·E is trained on a large dataset of images and can generate a wide range of images, from realistic to abstract, based on textual prompts.
Stable Diffusion is an open-source text-to-image model based on latent diffusion: it starts from random noise and iteratively denoises it, guided by a text prompt, until a coherent image emerges. Because its weights are freely available, anyone can run or fine-tune it locally, which makes it especially relevant to the deepfake problem.
What are deepfakes?
Deepfakes are a form of digital forgery that use artificial intelligence and machine learning to generate realistic images, videos, or audio recordings that appear to be authentic but are actually fake. These manipulated media files are created by superimposing one person’s face onto another’s body or by altering the voice, facial expressions, and body movements of a person in a video.
With the advancements in deep learning algorithms, it has become easier to create deepfakes, which can be used to spread misinformation, propaganda, or to defame someone. Deepfakes can be created using open-source software or customised tools and can be easily spread due to the viral nature of social media.
Examples of deepfakes created by generative AI
In recent months we’ve seen a number of deepfake examples created by generative AI go viral on social media. The imagery created by this technology is so realistic it’s fooled millions of people around the world. Some recent deepfake examples are listed below.
The Pope in a Balenciaga-style puffer jacket
In March 2023, a photo of Pope Francis looking ‘dripped out’ in a white puffer jacket went viral on social media. The 86-year-old pontiff looked as if he had been given a custom-made puffer jacket by Balenciaga. The image was shared far and wide and covered in numerous publications. But there was just one problem: The image was a deepfake created in Midjourney.
A leaked photo of Julian Assange looking unwell in prison
Again in March 2023, an apparently leaked photo of WikiLeaks founder Julian Assange was shared far and wide on social media. People who believed the photo was genuine posted their outrage, but a German newspaper interviewed the person who created the image, who claimed he did it to protest how Assange has been treated. Critics pointed out that creating fake news was not the appropriate way to make that protest. It was another deepfake example whose impact transcended the online world.
Trump getting arrested
Again in March 2023 (which, looking back, was a big month for deepfake examples), AI-generated images of Donald Trump being arrested circulated online. This particular deepfake was created not by just one individual but by many. It didn’t fool as many people as the other two examples, but some were duped and shared the images on social media believing they were real.
The Pentagon bombed
In May this year, an AI-generated deepfake image of an explosion at the Pentagon went viral on Twitter and caused US markets to plummet. The S&P 500 stock index fell 30 points in minutes, wiping $500 billion off its market cap. After the image was confirmed as fake the markets rebounded, but the incident showed the impact deepfakes can have. Verified accounts on Twitter didn’t help the situation either, as many of them shared the image as if it were real and were rightfully criticised for it.
DeSantis campaign shares deepfakes of Trump
Experts identified the use of AI-generated deepfakes in an attack ad against rival Donald Trump by the campaign backing Ron DeSantis for the Republican presidential nomination in 2024.
On June 5th, the “DeSantis War Room” Twitter account shared a video that highlighted Trump’s endorsement of Anthony Fauci, the former White House chief medical advisor and a key figure in the US response to COVID-19. Fauci has attracted significant opposition in right-wing politics, and the attack ad was intended to strengthen DeSantis’ support base by portraying Trump and Fauci as close collaborators.
AI images of politicians cheating on their spouses
An artist named Justin T. Brown created AI-generated images of politicians cheating on their spouses to highlight the potential dangers of AI. Brown’s intention was to initiate a conversation about the misuse of the technology. He shared the images on the Midjourney subreddit, but soon after, he was banned from the platform. Brown expressed conflicting feelings about the ban, acknowledging the need for accountability while questioning the effectiveness of regulating content this way.
To give you an idea of the incredible creativity in deepfakes, this TED discussion with AI developer Tom Graham provides an overview of the existing deepfake technology available and where it’s heading.
Tom’s company, Metaphysic, gained popularity with the release of a fake Tom Cruise video that received billions of views on TikTok and Instagram. They specialise in creating artificially generated content that looks and feels like reality by using real-world data and training neural nets. This is more accurate than VFX or CGI and helps create content that appears natural.
One of their examples maps the singing voice of a woman performing in Spanish onto Aloe Blacc’s face, making it look and sound as if he is singing in Spanish. This technology could eventually allow anyone to speak any language naturally, and creating such content will become easier over time.
Metaphysic is also capable of processing live video in real time, which is at the cutting edge of AI technology. They demonstrate this by replacing the interviewer’s face with Chris’s in a live video, and even replicating the voice. They can apply this technology to anyone, as demonstrated with Sunny Bates in the audience.
How to combat AI-generated deepfakes
Efforts are being made to develop technologies to detect and prevent deepfakes, but their effectiveness remains limited as the technology continues to evolve rapidly.
One way to combat AI-generated deepfakes is through the development of advanced detection technologies. These technologies can analyse patterns in audio and video data to identify signs of manipulation. Another approach is through the creation of a digital watermarking system that can verify the authenticity of media content.
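As a rough illustration of the verification idea, the sketch below shows a publisher computing a cryptographic tag over the media bytes at release time; any later edit, however small, makes verification fail. The key name and the use of a shared-secret HMAC are simplifying assumptions for illustration; real provenance schemes such as C2PA use public-key signatures and certificate chains rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical publisher-side key; a real provenance system would use an
# asymmetric key pair so that anyone can verify without being able to sign.
SIGNING_KEY = b"publisher-secret-key"

def sign_media(media_bytes: bytes) -> str:
    """Tag the publisher computes when the media file is released."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the file has not been altered since it was signed."""
    expected = hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...image bytes..."
tag = sign_media(original)
print(verify_media(original, tag))          # True: untouched file verifies
print(verify_media(original + b"!", tag))   # False: any manipulation breaks the tag
```

The limitation, of course, is that this proves only that a file is unchanged since signing, not that its content was truthful in the first place, which is why watermarking is one layer of a multi-pronged defence rather than a complete answer.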
Google has recently launched a new tool called ‘About This Image’ to help people spot fake AI images on the internet. The tool will provide additional context alongside pictures, including details of when the image first appeared on Google and any related news stories. This new feature will help people distinguish hyper-realistic AI-generated pictures from real ones, including those generated using tools such as Midjourney, Stable Diffusion, and DALL·E.
The tool is intended to surface news stories about fake images that have been subsequently debunked and is designed to tackle warnings that new AI tools could become a wellspring for misinformation and propaganda.
Perhaps the best way to combat AI-generated deepfakes is to educate the public about their potential harm; awareness may be crucial in preventing their spread. It is important to be vigilant when consuming media: verify its source and context, and apply critical thinking when interpreting its contents. With a multifaceted approach, we can deter the spread of AI-generated deepfakes and the harm they cause.
The future of generative AI and deepfakes
The future of AI and deepfakes is a controversial topic. The genie is out of the bottle and the technology is only going to get more realistic. Soon it won’t just be deepfake images but convincing deepfake video too. Voice cloning technology has already made significant progress, and there is no doubt that it will advance further in the coming years.
This raises serious concerns about the potential misuse of deepfake technology, from political propaganda to personal vendettas. Deepfakes have already been used to create fake pornographic videos, causing harm to the individuals involved.
While there are efforts to develop countermeasures to detect and prevent the spread of deepfakes, it will be a constant battle between the creators and those who aim to stop them.
As the technology advances, the lines between reality and fake will become increasingly blurred, making it more critical than ever to develop measures to identify and combat the spread of deepfakes.