

Deepfake: a New Study Found That People Trust “AI” Fake Faces More Than Real Ones


People cannot distinguish between a real face and one generated by artificial intelligence using StyleGAN2, researchers have found, and they are calling for safeguards against "deepfakes".

Dr Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, conducted experiments in which participants were asked to distinguish state-of-the-art StyleGAN2-synthesized faces from real faces and to rate how much trust each face evoked.

The results revealed that synthetically generated faces are not only highly photorealistic but nearly indistinguishable from real faces, and are even judged to be more trustworthy.

“AI” LEARNS THE FACES WE LIKE: The fake faces were created using generative adversarial networks (GANs), AI programs that learn to create realistic faces through a process of trial and error.

IS IT POSSIBLE TO IDENTIFY AN “AI” GENERATED FACE? THISPERSONDOESNOTEXIST.COM

The AI face generator is powered by StyleGAN, a neural network developed by Nvidia in 2018. A GAN consists of two competing neural networks: a generator that produces candidate images, and a discriminator that tries to tell whether an image is real or produced by the generator. Training ends when the generator consistently fools the discriminator.
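The adversarial loop can be sketched in a deliberately tiny form. The example below is an illustrative toy, not Nvidia's StyleGAN: the "real data" is a single number, the generator is one learnable scalar, and the discriminator is a logistic classifier. It only shows the alternating generator/discriminator updates described above.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# Toy 1-D "GAN": real data is the constant 5.0; the generator's output is a
# single learnable scalar theta; the discriminator is D(x) = sigmoid(a*x + b).
theta = 0.0          # generator parameter (starts far from the real data)
a, b = 0.0, 0.0      # discriminator parameters
lr_d, lr_g = 0.05, 0.02
real = 5.0

for step in range(4000):
    fake = theta
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * fake + b)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    a += lr_d * ((1 - d_real) * real - d_fake * fake)
    b += lr_d * ((1 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake) (non-saturating objective)
    d_fake = sigmoid(a * fake + b)
    theta += lr_g * (1 - d_fake) * a

print(theta)  # theta drifts toward the real value as the generator learns
```

At equilibrium the discriminator can no longer tell real from fake, which is exactly the stopping condition the paragraph above describes; real GANs play the same game over millions of image pixels instead of one scalar.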

An interesting point is that creating photographs of non-existent people was a by-product: the main goal was to train the AI to recognize faces, including fake ones. The company needed this to improve the performance of its video cards by automatically recognizing faces and applying other rendering algorithms to them.

Recognizing the image of a fake person is extremely difficult. The technology is now so advanced that roughly 90% of fakes go unrecognized by an ordinary person and 50% by an experienced photographer, and there are no dedicated detection services. Occasionally, however, the neural network makes mistakes that leave artifacts: an oddly bent pattern, a strange hair color, and so on.

The only thing you can do is take a closer look: human visual processing is still stronger than a computer's in many respects, so it is possible to recognize a forgery by careful inspection.

Jevin West and Carl Bergstrom created a website called “Which Face Is Real”, which is focused on teaching people to look more analytically at potentially false portraits. Before accepting that the person in a photo exists, there are several things to check. One of the most common tells is symmetry problems, in particular with eyeglasses and earrings.
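The symmetry cue can be turned into a crude automated check: mirror one half of the image and compare it with the other. The sketch below is a simplification for illustration only, assuming a grayscale image as a NumPy array; real deepfake detectors are far more sophisticated, and faces are never perfectly symmetric, so this score is a hint, not a verdict.

```python
import numpy as np

def asymmetry_score(img):
    """Mean absolute difference between the left half of an image and the
    mirrored right half. Higher values suggest stronger asymmetry, e.g.
    mismatched earrings or eyeglass frames."""
    h, w = img.shape[:2]
    half = w // 2
    left = img[:, :half].astype(float)
    right = np.fliplr(img[:, w - half:]).astype(float)
    return float(np.mean(np.abs(left - right)))

# A perfectly mirror-symmetric test pattern scores 0; random noise scores higher.
row = np.concatenate([np.arange(5), np.arange(5)[::-1]])
symmetric = np.tile(row, (10, 1))
noisy = np.random.default_rng(0).integers(0, 255, size=(10, 10))
print(asymmetry_score(symmetric), asymmetry_score(noisy))
```

In practice one would compare the score only within regions where symmetry is expected (earrings, glasses), since hair and lighting break symmetry even in genuine photos.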

On the other hand, last summer Facebook researchers said they had developed artificial intelligence that can identify “deepfakes” and track their origin using reverse engineering.

DEEPFAKE USAGE & RISKS

The issue of deepfakes is an important and difficult one. Like many other types of harmful content, it is adversarial in nature and will continue to evolve, and no single organization can solve these challenges on its own. The dangers posed by the spread of AI-generated deepfakes range from spreading misinformation and inspiring misunderstanding, fear, or disgust, to creating false narratives about real people.

“AI” ETHICAL CHALLENGES

Using AI responsibly is the “immediate challenge” facing the field of AI governance, the World Economic Forum says.

In its report, The AI Governance Journey: Development and Opportunities, the Forum says AI has been vital in progressing areas like innovation, environmental sustainability and the fight against COVID-19. But the technology is also “challenging us with new and complex ethical issues” and “racing ahead of our ability to govern it”.

The report looks at a range of practices, tools and systems for building and using AI.

These include labeling and certification schemes; external auditing of algorithms to reduce risk; regulation of AI applications; and greater collaboration between industry, government, academia and civil society to develop AI governance frameworks.


