
Deepfake videos are everywhere. So how do we know what’s real?

September 23, 2019 | By Accorian

Remember the phrase “seeing is believing”? Deepfake videos have people second-guessing what they are watching.

Deepfakes are videos manufactured with AI technology that can superimpose one person’s face onto another’s and make them appear to say or do things that never happened. These videos have been used to spread propaganda on social media networks, especially in politics.

Special-effects video techniques that were once limited to movie studios and expensive software are now readily available and getting into the wrong hands.

Security experts believe that deepfakes were used by deceptive Facebook groups to influence the 2016 US presidential election. As the 2020 presidential election approaches, companies are working quickly on new technology that can detect deepfakes.

How did Deepfakes begin?

In 2015, Google released TensorFlow, a powerful open-source machine-learning framework that was later misused to create deepfake technology. Tools built on it can automatically graft the image of one face onto another face in a video, almost seamlessly.

A Reddit user built FakeApp with this software and released it in a Reddit community, allowing anyone to download the AI software and create deepfakes. Reddit has since banned that community, but it was too late. The software has been adapted into FaceSwap and, most recently, the viral Chinese app Zao, which can replace a movie star’s face with your own.

Deepfake audio is becoming more common

We also have to be careful about what we hear. Cybercriminals can use AI audio software to mimic someone’s voice. In March 2019, deepfake audio was used to fool the CEO of a large energy firm based in the UK. The CEO thought he was speaking to his boss and unknowingly transferred €220,000 ($243,000) to the hackers.

As audio software tools become more common, it is likely that more criminals will use them to their advantage. Cybersecurity experts report a 47% increase in voice fraud between 2016 and 2017.

Racing to build new software

Deepfake creators are constantly changing and adapting their techniques to outsmart new detection tools, and detection software developers are working hard to keep up.

Facebook recently committed $10 million to the Deepfake Detection Challenge, in which participants develop new technology for detecting deepfake videos.

Top AI companies have designed automated systems that analyze videos by assessing lighting and blinking patterns and by comparing the footage with the subject’s real-world facial movements.

Researchers at UC Berkeley have been working on new technology that can detect deepfakes, but they feel they are falling behind. “We are outgunned,” said Hany Farid, a computer-science professor and digital-forensics expert at the University of California at Berkeley. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”

There isn’t a law regulating deepfakes yet, but legal and technical experts have recommended adapting existing laws to cover identity fraud, especially when it comes to impersonating a government official.


How can you detect a deepfake?

While some videos are obviously made for fun, like the Steve Buscemi and Jennifer Lawrence Oscars video, experts are concerned about public figures being impersonated to say things they never said.

Paying attention to small details can help you detect a deepfake video. Subjects depicted in a deepfake video blink less often than people do in real life. Deepfake technology still struggles to reproduce the frequency and sometimes erratic nature of natural blinking.
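To make the blink-rate idea concrete, here is a rough sketch (not any vendor’s actual detector) that estimates how often a subject blinks using OpenCV, dlib’s 68-point facial landmarks, and the common eye-aspect-ratio heuristic. The landmark model file, the threshold, and the example file name are assumptions you would need to supply and tune.

```python
# Rough sketch of blink-rate estimation via the eye aspect ratio (EAR).
# Assumes: pip install opencv-python dlib numpy, plus the publicly available
# "shape_predictor_68_face_landmarks.dat" model downloaded separately.
import cv2
import dlib
import numpy as np

EAR_THRESHOLD = 0.21  # below this, the eye is treated as closed (assumed value)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small values mean a closed eye."""
    a = np.linalg.norm(pts[1] - pts[5])
    b = np.linalg.norm(pts[2] - pts[4])
    c = np.linalg.norm(pts[0] - pts[3])
    return (a + b) / (2.0 * c)

def estimate_blink_rate(video_path):
    """Return estimated blinks per minute for the first face found in each frame."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < EAR_THRESHOLD:
                eye_closed = True
            elif eye_closed:  # eye re-opened, so count one blink
                blinks += 1
                eye_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People typically blink roughly 15-20 times per minute; a much lower rate in a
# talking-head clip is one weak signal worth noting, not proof of a deepfake.
print(estimate_blink_rate("suspect_clip.mp4"))
```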

Other signs of a deepfake video include:

  1. Blurring or a change in skin tone toward the edges of the face (see the sketch after this list)
  2. The face briefly becoming blurry when a hand or another object passes over it
  3. Double chins or double edges along the face
  4. Eyes that sit too close to or too far from the edges of the face
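As a loose illustration of the first point, the sketch below (our own heuristic, not an established detector) compares image sharpness at the rim of a detected face with sharpness at its centre, since face-swap blending often leaves the pasted-in boundary slightly blurrier than the interior. The face bounding box, the rim width, and the flagging ratio are all assumptions to tune.

```python
# Loose heuristic sketch: is the rim of the face noticeably blurrier than
# the centre? Assumes: pip install opencv-python numpy; the face box can come
# from any face detector.
import cv2
import numpy as np

def sharpness(gray_patch):
    """Variance of the Laplacian: a standard blur measure (higher = sharper)."""
    return cv2.Laplacian(gray_patch, cv2.CV_64F).var()

def rim_vs_centre_sharpness(gray, face_box, rim=0.15):
    """Return (rim_sharpness, centre_sharpness) for a face box (x, y, w, h)."""
    x, y, w, h = face_box
    face = gray[y:y + h, x:x + w]
    mw, mh = int(w * rim), int(h * rim)
    centre = face[mh:h - mh, mw:w - mw]
    bands = [face[:mh, :], face[h - mh:, :], face[:, :mw], face[:, w - mw:]]
    rim_sharp = float(np.mean([sharpness(b) for b in bands]))
    return rim_sharp, sharpness(centre)

# Example usage with a hypothetical frame and detected face box:
# gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# rim, centre = rim_vs_centre_sharpness(gray, (x, y, w, h))
# if centre > 3 * rim:  # arbitrary ratio; a much blurrier rim is suspicious
#     print("Face edges look unusually soft compared to the interior")
```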

Deepfake detection software is available

Reality Defender is intelligent software that runs alongside your web browser to detect fake pictures and videos. It scans every image and video on the page and flags suspected fakes.

XceptionNet is an algorithm that spots manipulated videos even if the video has been compressed (which usually makes detection more difficult).

This system was developed by Andreas Rossler and colleagues at the Technical University of Munich in Germany. Compared with other detection software, it has been described as “ahead of the curve.”
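For a sense of how this style of detector works, here is a minimal sketch (not the Munich team’s actual implementation) of fine-tuning an Xception-based binary classifier with Keras to label face crops as real or fake. The folder layout, training settings, and dataset are assumptions for illustration only.

```python
# Minimal sketch: fine-tune an ImageNet-pretrained Xception backbone as a
# real-vs-fake frame classifier. Assumes: pip install tensorflow, and a
# hypothetical folder layout of frames/real/*.jpg and frames/fake/*.jpg.
import tensorflow as tf

IMG_SIZE = (299, 299)  # Xception's native input resolution

train_ds = tf.keras.utils.image_dataset_from_directory(
    "frames", labels="inferred", label_mode="binary",
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # start by training only the new classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(frame is fake)
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```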

Social media platforms can actually help in detecting deepfakes because they compress videos, which reduces the quality and makes flaws more obvious.

So how can you protect yourself?

While detection technology is improving, the best advice is to watch the video carefully and judge for yourself. Watch other videos of the same subject and see if you can detect any unnatural movements. Check websites other than social media to get another perspective on the news report. The first step is being aware.

If you have any questions about deepfake videos or about how we can help your company improve its cybersecurity, feel free to contact the security experts at Accorian.
