With the growing popularity of malicious deepfakes, we are witnessing the emergence of a new generation of misinformation. Deepfakes are becoming increasingly sophisticated and accessible, allowing malicious individuals and organizations to create and disseminate deceptions, while our society is ill-equipped to distinguish the real from the fake, let alone to act with discernment against this phenomenon.
Deepfakes, also called "hypertrucages" in French, are a form of trickery that has been gaining ground on social media. While the technology can be fascinating, it also raises fears of the worst. Deepfakes are audio and video recordings, created using machine learning algorithms, that show real people saying and doing things they have never said or done.
On the side of good uses of the technology: the Dalí Museum, for example, managed the feat of synthetically reviving Salvador Dalí so that he can greet visitors at his exhibition and take a selfie, or self-portrait, with them.
On the side of malicious uses: a woman used this same technology to create fake videos of her daughter's cheerleading rivals in order to bully them.
Another alarming example is the use of deepfake phishing in cyberattacks against businesses. In one high-profile case from 2019, cybercriminals used deepfake phishing to trick the CEO of a U.K.-based energy firm into wiring them $243,000, according to The Wall Street Journal. Using AI-based voice spoofing software, the criminals successfully impersonated the head of the firm's parent company, making the CEO believe he was speaking with his boss.
Deepfake phishing attacks take one of two shapes. In a real-time attack, the deepfake audio or video is so sophisticated that it tricks the victim into believing the person on the other end of a live call is who they claim to be. In a non-real-time attack, a cybercriminal impersonates someone via deepfake audio or video messages that are then distributed through asynchronous communication channels, such as chat, email, voicemail or social media. Asynchronous communication relieves criminals of the pressure to respond believably in real time, letting them perfect a deepfake clip before distributing it.
In short, today, anyone can create their own fake news, "prove" it with a deepfake video or audio clip, and spread it for whatever purpose. That said, the use of deepfakes in disinformation is not new, so why should we now be more concerned about the emergence of this technology?
- The rise of this phenomenon now forces us to doubt the veracity of any audiovisual content. Denials are becoming more and more credible, and the liar's dividend, the ability of real wrongdoers to dismiss authentic evidence as fake, grows more powerful. Anyone can easily question irrefutable facts.
- Moreover, as we have explained before, videos are particularly effective at triggering emotional reactions because we generally consider video to be irrefutable proof of veracity.
There are currently many debates about the best solutions to this issue. We believe education has the advantage of addressing the problem at its source and building resilience in our society, inoculating it so that it develops a better radar for deepfakes.
We are working hard to bring you the latest fact-checked information and tools. Donate every time you read disinformation, and the money will be used to pay for a fact-checking ad!