Methods to counteract disinformation as deepfakes run rampant
Deepfake and cheap fake videos have become increasingly common. When used irresponsibly, they allow disinformation to spread quickly, posing significant threats to individuals and organizations.
In Taiwan, synthetic media has also caused widespread public concern, especially after the arrest in 2021 of a Taiwanese YouTuber suspected of creating and selling deepfake pornographic videos featuring more than 100 public figures.
The Taiwan FactCheck Center (TFC) recently spoke to National Cheng Kung University statistics professor Hsu Chih-chung (許志仲) and Chen Chun-cheng (陳駿丞), an assistant professor at the Research Center for Information Technology Innovation, about ways to spot and identify a deepfake or cheap fake.
Deepfakes versus cheap fakes
A deepfake is a video that has been altered through some form of machine learning to generate human bodies and faces, whereas a cheap fake is an audiovisual manipulation created with cheaper and more accessible software, such as Photoshop.
With the advent of social media and the internet, both kinds of audiovisual manipulation can now spread at unprecedented speeds, and both are increasingly deployed in disinformation campaigns to discredit public figures and celebrities.
Celebrity deepfakes and GANs
Several deepfakes have recently gone viral on the internet, including one in which Tesla and Twitter CEO Elon Musk allegedly admits to being high on drugs, and a TikTok clip of Morgan Freeman telling Will Smith off for slapping Chris Rock. TFC has debunked the deepfake Elon Musk video.
Image: A Deepfake of Elon Musk went viral online.
“Deepfakes require a large amount of image and video data to train machine learning to create realistic images and videos,” Chen said.
Because public figures such as celebrities tend to have a large number of videos and images available online, they provide rich source material for machine-learning algorithms, he added.
Generative adversarial networks (GANs), a class of deep-learning-based generative models, have improved greatly in recent years and can now produce highly realistic fake photos and videos.
Although deepfake technology has improved, awareness of the many arenas in which deepfakes are present has also increased, Hsu said.
However, as technologies improve, the time to train a machine-learning model will also decrease, hence deepfakes are bound to become more prevalent, he said.
Video: Deepfake of Elon Musk promoting a crypto scam, from BleepingComputer.com on Vimeo.
There are currently several useful deepfake detection tools, such as DeepWare, which are available to users for free on the internet.
According to the statistics professor, many online tools usually cannot tell if a video is 100 percent real or fake.
“This is because there are different processing techniques for deepfakes,” Hsu said, adding that the public should treat the results as a reference only.
How to spot a deepfake
The experts urged members of the public to help stop the proliferation of disinformation stemming from the misuse of this technology by learning to tell what is real from what is not, and offered four techniques for doing so.
Even without detection tools, synthetic media often contains production flaws that are discernible to the naked eye, they said.
1. Check for blurry background
Look closely at the background for any unnatural blurring.
2. Distorted facial contours
Unusual skin tones and distorted facial contours indicate that the video might be fake. Also, check if the movements of the individual are choppy and distorted from one frame to the next.
3. Jagged pupils
In real footage, human pupils appear round, while those in computer-generated deepfake videos or images often have more jagged, less symmetrical edges.
4. Unnatural facial expressions
Check for unnatural mouth and lip movements that betray poor lip synchronization. The video may be a deepfake if the individual’s face does not display the emotion that matches what he or she is saying, especially when facial morphing or image stitching can be detected.
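The “jagged pupils” cue above can even be quantified. As an illustration only (not a tool the experts mentioned), the sketch below scores how circular a pupil outline is using the classic circularity ratio 4πA/P²: a smooth circle scores close to 1, while a jagged, GAN-style edge scores much lower. The outlines and the 0.9 threshold here are hypothetical.

```python
import math

def circularity(points):
    """Circularity = 4*pi*area / perimeter^2.
    Close to 1.0 for a smooth circle, noticeably lower for jagged outlines."""
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1              # shoelace formula (signed, doubled)
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4 * math.pi * area / (perimeter ** 2)

def outline(radius_fn, n=360):
    """Sample a closed outline whose radius at angle t is radius_fn(t)."""
    return [(radius_fn(t) * math.cos(t), radius_fn(t) * math.sin(t))
            for t in (2 * math.pi * i / n for i in range(n))]

# Hypothetical pupil outlines: a smooth circle vs. a wavy, synthetic-looking edge.
round_pupil = outline(lambda t: 10.0)
jagged_pupil = outline(lambda t: 10.0 + math.sin(24 * t))

PUPIL_THRESHOLD = 0.9  # illustrative cutoff, not a validated value
print(circularity(round_pupil) >= PUPIL_THRESHOLD)   # smooth pupil passes
print(circularity(jagged_pupil) >= PUPIL_THRESHOLD)  # jagged pupil fails
```

In practice a detector would first segment the pupil from an eye-region crop before measuring its outline; this sketch only shows why the geometry of the edge is a usable signal.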
Fostering media literacy
While numerous free software tools have been developed for detecting deepfakes, members of the public can still spot a deepfake using the naked eye.
However, as technology continues to advance, deepfakes will become even harder to spot, Hsu warned.
“If people are not vigilant, they can easily be misled by a deepfake,” he said, adding that raising the digital literacy of the public is an effective strategy to combat disinformation.
When seeing is no longer necessarily believing, becoming more aware of deepfakes or artificial intelligence-related issues is the best way to identify and decipher synthetic media, Chen said.
You may find it interesting:
TFC’s fact-check on the deepfake of Elon Musk being high on drugs