What Are Deepfakes and How Can We Identify Them?

According to a 2023 World Economic Forum report, deepfake videos on the internet are increasing at an annual rate of 900%.

By Ernesto Eimil (El Toque)

HAVANA TIMES – Deepfakes are, for the most part, fake videos or images created with digital software, artificial intelligence, and facial-feature manipulation. The term combines two English expressions: deep learning and fake.

Deepfakes usually combine images and audio to recreate events, statements, or actions that never happened. These fictional videos tend to be very convincing, making them much harder to detect than other forms of misinformation. For now, most of these videos are short, since they require significant resources and many hours of work.

How Do They Work?

The fundamental concept behind deepfakes is facial recognition technology. More and more, the devices we own are capable of recognizing and recreating human faces, partly due to how users employ them.

Deepfakes can be produced with a technique called a “generative adversarial network” (GAN), which is what makes the faces so realistic. For example, a GAN can analyze thousands of photos of US actor Tom Cruise and generate a new image that resembles them without being an exact copy of any one of them.
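For technically curious readers, here is a minimal sketch of that adversarial idea in PyTorch. It is illustrative only: the layer sizes, the toy “image” vectors, and the training data stand in for real face photos, and none of it reflects the architecture of any actual deepfake tool.

```python
# Minimal generative adversarial network (GAN) sketch in PyTorch.
# Illustrative only: sizes, layers, and data are placeholders, not the
# architecture of any real deepfake system.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 16, 64  # assumed toy dimensions

# The generator turns random noise into a fake "image" (here just a vector).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)

# The discriminator tries to tell real samples from generated ones.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(200):
    # Stand-in for a batch of real photos (e.g., thousands of face crops).
    real = torch.randn(32, IMG_DIM)
    fake = generator(torch.randn(32, LATENT_DIM))

    # 1) Train the discriminator: label real samples 1 and fakes 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator: it wants fakes -> 1.
    fake = generator(torch.randn(32, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

After enough rounds of this tug-of-war, the generator produces samples the discriminator can no longer reliably reject; real deepfake tools apply the same principle to enormous collections of face images.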

An example of a deepfake: “Tom Cruise with his doubles during the filming of the last installment of Mission: Impossible.”

This image, which some users on X believed showed the famous actor and his stunt doubles, is actually fake. According to the Community Notes attached to the post, its author admitted as much in the comments.

As can be seen, the image looks strikingly realistic. The technology used to create it can be programmed to map key facial features such as the corners of the eyes and mouth, the nostrils, and the jawline.
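That kind of facial-feature mapping can be reproduced with open-source tools. Below is a brief sketch using the dlib library’s 68-point landmark model; the pretrained model file must be downloaded separately, and the file names and image paths are placeholders for the example.

```python
# Sketch of facial landmark mapping with dlib's 68-point predictor.
# Assumes the pretrained file "shape_predictor_68_face_landmarks.dat" has
# been downloaded separately; file and image names are examples only.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")                      # example input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    landmarks = predictor(gray, face)
    # The 68 points cover the jawline (0-16), eyebrows, the nose and
    # nostrils, the eyes (36-47), and the mouth corners and lips (48-67).
    for i in range(68):
        point = landmarks.part(i)
        cv2.circle(image, (point.x, point.y), 2, (0, 255, 0), -1)

cv2.imwrite("face_landmarks.jpg", image)
```

Once those points are located, a face can be aligned, swapped, or re-animated frame by frame, which is precisely what makes convincing deepfakes possible.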

What Are the Consequences of Deepfakes?

Although this technology is relatively new, it’s improving and advancing at great speed. The development of deepfakes has moral and political implications and, according to experts, could further weaken the credibility of online media.

This apathy and mistrust toward legitimate, balanced sources of information lead many people to question facts that were once considered beyond dispute. The educational website Webwise adds: “If deepfakes make people believe they can’t trust videos, the problems of misinformation and conspiracy theories could get worse.”

Another danger lies in how authoritarian regimes might use this technology. Investigative journalists have accused the Venezuelan government, led by Nicolás Maduro, of creating deepfakes to project a manipulated image of the country abroad.

Despite numerous examples of manipulation, it’s important to mention that the use of this technology is not inherently bad or necessarily disinformative. Some people make deepfakes while clearly stating in advance that their product is a parody or not real. They can also be used in filmmaking — for instance, in a scene from the Star Wars saga, the face and voice of the late actress Carrie Fisher were digitally placed over those of actress Ingvild Deila.

How Can We Identify Deepfakes?

The good news is you don’t have to be an expert to spot these kinds of videos. Here are some tips you can follow:

  1. Pause and reflect. If you see a video in which a famous or powerful person says something you’d never expect them to say, take a moment. Watch it again. Does it seem normal for that person to make such statements? If you’re unsure, it’s best not to share it.
  2. Verify the source. Do a quick Google search to see if you can find the same news or material on other reliable media outlets. Normally, if it’s real, it will be shared widely by trusted sources.
  3. Look for small details. Blending of facial edges with clothing, mismatched or poorly rendered earrings or jewelry, unrealistic facial hair, perfectly symmetrical faces, odd teeth, or unnatural finger shapes — these are often giveaways. Even though deepfake technology is advanced, it still struggles to reproduce small details.
  4. Watch the mouth closely. If the video is fake, the sound and lip movements may not be perfectly synchronized.
  5. Notice the blinking. Deepfakes tend to blink less than real people and may do so in a forced or unnatural way (a simple way to measure this cue is sketched after this list).
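For readers who want to go further, the blinking cue in point 5 can be quantified with the widely cited “eye aspect ratio” heuristic. The sketch below assumes you already have six landmark points per eye from a face-landmark detector such as the one sketched earlier; the threshold value is an assumption to be tuned for each video, not a standard.

```python
# Sketch of the "eye aspect ratio" (EAR) blink cue mentioned in tip 5.
# Assumes six (x, y) landmark points per eye, ordered corner-to-corner with
# the upper lid points before the lower lid points (as in 68-point models).
from math import dist  # Python 3.8+

def eye_aspect_ratio(eye):
    """eye: list of six (x, y) points. EAR drops sharply when the eye closes."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

BLINK_THRESHOLD = 0.2  # assumed value; tune for the footage being checked

def looks_closed(left_eye, right_eye):
    """Rough per-frame check: True when both eyes appear shut."""
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    return ear < BLINK_THRESHOLD
```

A long clip in which this ratio almost never dips below the threshold is a hint worth investigating, though it is far from proof on its own.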

Remember that most misinformation is created to generate doubt and to reinforce preexisting beliefs. And while the technology used to mislead improves quickly, you can train yourself to recognize it — and help stop its spread.

If you need help verifying content, you can contact us through the following channels:
Email: [email protected] | WhatsApp and Signal: +1 786 403-8554

First published in Spanish by El Toque and translated and posted in English by Havana Times.

