AI Deepfakes: Catch Me If You Can

With the Deepfake industry growing larger and smarter by the day, there is widespread concern about how to catch Deepfakes. Is it possible? In many cases, it is. But the human-machine combinations are frighteningly tough to detect.

Sujit Chakraborty

One of the reasons almost everyone is concerned about Deepfakes is that almost anyone is susceptible to them. The Deepfake industry does not care whom it fakes, so long as the video goes viral and racks up huge viewership. And this is related to the sites that pay for the numbers.

YouTube is one of them; Facebook is another. But be clear: it is not YouTube or Facebook that is generating AI Deepfakes. Such platforms make their money from advertisers, who pay for the widest possible reach for the advertisements of their products or services.

A part of that revenue is shared by these platforms with the video maker. But that does not incriminate the platforms. It is just that, with AI coming in, people are manipulating the algorithms to create anything sensational, anything likely to go viral and rake in the numbers.

According to a Scientific American (SA) podcast from March last year, there are really two kinds of AI Deepfakes. The first is the kind manipulated entirely by the machine alone. Researchers at the MIT Media Lab in the US have found that these are more or less catchable.

But the enormity of the task is clear: according to a seminar report from the US, 65 per cent of all social media videos are fake.

They say that if there is a video of a person looking directly at the camera, one should check the lip-sync. Because the face of a real person is used only as the base of a Deepfake, the words s/he appears to say are actually spoken in someone else's voice, and there will be a lag between those words and the movements of the lips they are purportedly coming from.
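The lag the researchers describe can be estimated computationally. Below is a minimal, purely illustrative sketch (not taken from the podcast or any real detector): it assumes you have already extracted a per-frame speech-energy signal from the audio track and a per-frame mouth-opening signal from the video, and it uses cross-correlation to find the shift that best aligns the two. Here both signals are synthetic stand-ins.

```python
# Illustrative sketch, assuming per-frame audio-energy and mouth-opening
# signals have already been extracted. These names and the synthetic data
# are this example's own, not from the article.
import numpy as np

def estimate_lag(audio_env, mouth_open):
    """Return the shift (in frames) that best aligns the two signals.
    A negative result means the mouth movement trails the audio."""
    a = (audio_env - audio_env.mean()) / audio_env.std()
    m = (mouth_open - mouth_open.mean()) / mouth_open.std()
    corr = np.correlate(a, m, mode="full")
    return int(np.argmax(corr)) - (len(m) - 1)

# Synthetic example: the mouth signal is a copy of the audio signal
# delayed by 3 frames, as it might be in a crudely lip-synced fake.
rng = np.random.default_rng(0)
audio = rng.random(200)
mouth = np.roll(audio, 3)
print(estimate_lag(audio, mouth))  # -3: mouth trails audio by 3 frames
```

A real system would, of course, need face tracking and audio feature extraction to produce those two signals; the point here is only that a consistent nonzero lag is the measurable signature of the mismatch the researchers describe.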

In an article in the New York Times, Cade Metz writes that in July 2018, two of the world’s top artificial intelligence labs unveiled a system that could read lips.

“Designed by researchers from Google Brain and DeepMind — the two big-name labs owned by Google’s parent company, Alphabet — the automated setup could at times outperform professional lip readers. When reading lips in videos gathered by the researchers, it identified the wrong word about 40 percent of the time, while the professionals missed about 86 percent. 

“In a paper that explained the technology, the researchers described it as a way of helping people with speech impairments. In theory, they said, it could allow people to communicate just by moving their lips.” 

Cade Metz writes “As many of the leading A.I. researchers move into corporate labs like Google Brain and DeepMind, lured by large salaries and stock options, they must also obey the demands of their employers. Public companies, particularly consumer giants like Google, rarely discuss the potential downsides of their work.” 

You can click on this link to check how this can be caught:

Are You Better Than a Machine at Spotting a Deepfake? • Science, Quickly (spotify.com)

However, there are videos in which a person is moving and walking. If that video has been made by the machine alone, one needs to look at the shadows: if, say, the light source is on the left, the shadow should logically fall on the side away from it, to the person's right. If that is not the case, it is easily spotted as a fake.
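Since a shadow falls on the side away from the light source, the two directions should be roughly opposite, and that is something a script can test. The sketch below is illustrative only: it assumes a light direction and a shadow direction (as 2-D vectors in the image plane) have already been estimated by some image-analysis step, and simply checks that the angle between them is close to 180 degrees.

```python
# Illustrative consistency check, assuming the light direction (from the
# subject toward the light) and the shadow direction have already been
# estimated elsewhere. The function name and tolerance are assumptions.
import math

def shadow_consistent(light_dir, shadow_dir, tolerance_deg=25):
    """A shadow should point away from the light source, so the two
    2-D direction vectors should be roughly opposite (about 180 deg)."""
    lx, ly = light_dir
    sx, sy = shadow_dir
    dot = lx * sx + ly * sy
    norm = math.hypot(lx, ly) * math.hypot(sx, sy)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return abs(angle - 180.0) <= tolerance_deg

# Light on the subject's left, shadow cast to the right: consistent.
print(shadow_consistent((-1, 0), (1, 0)))   # True
# Light on the left but shadow also on the left: a tell-tale mistake.
print(shadow_consistent((-1, 0), (-1, 0)))  # False
```

Estimating those two vectors reliably from a single video frame is the hard part in practice; the check itself is simple geometry.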

The real problem is with fakes that involve both the machine and a person or persons. Those making the video create the algorithmic fake and then check for such mistakes and remove or correct them.

According to the SA podcast, this takes more than 24 hours of work on the machine-made fake. So if you feel a video is fake, you can take the help of sites such as FactCheck.org.

I found something more interesting in the BBC (The Forum - A deep dive into deepfakes - BBC Sounds) 

This programme is named The Forum and is dedicated to tackling the entire range of Deepfake issues. The most interesting thing is that if you tune in, you can go live with the researchers and talk to them. The episode features several experts, and the first person who can be heard is from Srinagar, Jammu and Kashmir.

This man (his name was not given) said he is not a programmer, but finds AI extremely interesting because it makes his job of designing websites incredibly easy.

And yet, this same gentleman says he is terrified of Deepfakes. This just shows that AI per se is one of the most extraordinary scientific inventions; but just as medicines can be misused, so can this most advanced technology.

Of late, there has been considerable research on discerning the real from the fake, and if you want to spend more time studying it, you can find substantial material at this link:

Overview ‹ Detect Deepfakes: How to counteract misinformation created by AI — MIT Media Lab

Most importantly, for those involved in the elections that are coming up, it would be very useful to try this seminar, which was exclusively devoted to elections and misinformation, featuring the two experts D’Angelo Gore and Matt Groh.

(Coming up: The Regulation Issue in Deepfakes)