Dangerous deepfakes blur line between real and sham

Update: 2019-06-30 01:45 GMT
(From L-R) US Democrat Nancy Pelosi, former US president Barack Obama and Facebook founder Mark Zuckerberg have all been targets of deepfakes.

Fake news has become passé. Deepfakes are all the rage right now.

But what exactly are they? Deepfakes are videos altered to show people saying things they never said. The intention behind creating the videos may be humour, but often it is malicious, aiming to distort or completely change a person’s speech. The worrying thing is how real deepfakes manage to seem. The precision with which the videos are engineered makes it almost impossible to tell them apart from the genuine ones, giving them an eerie sense of credibility.

So close to reality are these videos that they have the Pentagon worried. The US Department of Defense has hired private agencies to spot and take down deepfake videos circulating online.

Ironically, the latest victim of deepfakes is Mark Zuckerberg, the master puppeteer who sits atop troves of valuable data on people worldwide.

In a deepfake video uploaded to Facebook-owned Instagram in June, Zuckerberg is seen boasting of being the “master controller” and possibly misusing the data he controls.

According to reports, the video was created by an Israeli startup called Canny AI using old footage of Zuckerberg, in which his facial expressions and way of talking were imitated to make the video feel genuine. However, the voice in the footage was an actor’s.

Nicolas Cage, Kim Kardashian, Barack Obama, Freddie Mercury, Donald Trump and Marcel Duchamp are some of the many others who have been the target of such deepfakes.

Origin of deepfake

Even though the term has been trending for only a few months, the origin of deepfakes can be traced back to December 2017, when a Reddit user first used the term and pornographic deepfakes surfaced around the same time.

In those pornographic deepfakes, one person’s face was superimposed on another’s body.

Hollywood’s highest-paid actress Scarlett Johansson, famous for playing Black Widow in the Avengers movies, is the most frequent target of pornographic deepfakes. Numerous easily accessible fake videos exist in which the actress’s face has been used.

When asked about them, the actress said she cannot stop the internet from pasting her face onto porn. “I think it’s a useless pursuit, legally, mostly because the internet is a vast wormhole of darkness that eats itself,” Johansson was quoted as saying in an interview with The Washington Post.

Manufacturing ‘original fake videos’

Making a deepfake video is remarkably simple and can be done with readily available online software, which uses algorithms to scan faces and superimpose them on the faces in the original videos.

To make the output more precise and clean-looking, the software is fed numerous videos until it can automatically reproduce skin colour and facial expressions. The machine learning, and the transposition that follows, make the videos seem flawless.
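As a rough sketch of the underlying idea, and not the code of FakeApp or any other specific tool, the example below shows the shared-encoder, two-decoder autoencoder set-up widely reported to power face-swapping: one encoder learns features common to both faces, each person gets their own decoder, and the swap simply decodes one person’s frames with the other person’s decoder. The layer sizes, the tiny training loop and the random placeholder data are assumptions for illustration only.

```python
# Illustrative sketch of a shared-encoder / two-decoder face-swap model.
# Sizes, loop length and data are placeholders, not any product's code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One encoder shared by both identities, one decoder per person.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Placeholder batches of 64x64 aligned face crops (real tools feed
# thousands of frames extracted from video of each person).
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(3):  # a real run takes many thousands of steps
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": reconstruct person A's frames through person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

In real tools the placeholder tensors are replaced by thousands of aligned face crops extracted from video, and training runs for many hours before the swapped frames start to look convincing.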

On April 17, 2018, Buzzfeed uploaded a deepfake video of Barack Obama that showed him saying things he never would have.

The video, which Buzzfeed said it made in 56 hours to spread awareness, was created using FakeApp. Although the video featured Obama, the voice was comedian Jordan Peele’s.

The video was a hit and soon went viral. Obama, in the fake video, can be seen talking about deepfakes before cheekily signing off with “stay woke.”

Implications

All this seems fun and harmless until the victim is someone we know or, worse, ourselves.

Deepfakes are so easy to create that they do not require powerful computers, specialised software or even coding ability. All you need is a free app and some existing videos of the person you want to imitate.

The circulation of deepfakes along with other videos that pop up casually on social media has deeper implications than most people realise.

A video that surfaced in May 2019 made Nancy Pelosi, a top US Democrat, look heavily drunk, slurring her words as she commented on Trump during an interview. The video was also shared by Donald Trump’s lawyer, whose tweet was shared a few thousand times before the clip was declared fake.

But even after the clarification, the video wasn’t taken down from social media.

This is an example of character assassination for the purposes of political propaganda. Deepfakes can also be used for revenge porn and blackmail. The threat looms even larger as the US, Israel, Europe and Canada gear up for elections.

India and deepfakes

No deepfake targeting Indian celebrities or politicians has surfaced yet, but it would be too optimistic to expect India to be spared.

The problem could be even larger in India, a democracy with one of the largest internet user bases in the world, where the blurring of the line between reality and lies becomes all the more dangerous.

As it is, India is struggling to combat fake news. In June 2018, AFP reported that nearly 20 people had been killed by mobs in April-May 2018 after fake kidnapping rumours circulated on Facebook and WhatsApp.

Counter-measures

Deepfakes have begun to alarm people who realise their potential to cause great damage. As a result, the US introduced its first bill to criminalise the malicious creation and distribution of deepfakes in December 2018.

According to reports, China too is reviewing possible laws and regulations to curb the menace. Meanwhile, some software makers are already building tools that can differentiate between deepfake and original videos.

Amber, a New York-based company, has started producing software that does just that. Amber applies the same machine-learning principles used to create deepfakes, analysing basic movements such as how often the eyes blink, lip twitches and head tilts, to spot the difference.
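As a toy illustration of one such cue, and not Amber’s actual method, the sketch below counts eye blinks from a per-frame “eye aspect ratio” (EAR), a standard landmark-based measure that drops sharply when an eye closes; deepfaked faces have been reported to blink unnaturally rarely. All coordinates, thresholds and the sample trace are fabricated for illustration.

```python
# Toy blink-rate cue: compute an eye aspect ratio per frame and count
# blinks. Real detectors would get landmarks from a face tracker run
# on every video frame; everything below is fabricated sample data.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6 (x, y) landmark points around one eye, in the usual
    68-point ordering. The ratio drops sharply when the eye closes."""
    eye = np.asarray(eye, dtype=float)
    vertical = (np.linalg.norm(eye[1] - eye[5])
                + np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.2):
    """Count open-to-closed transitions in a per-frame EAR trace."""
    blinks, was_open = 0, True
    for ear in ear_per_frame:
        closed = ear < closed_thresh
        if was_open and closed:
            blinks += 1
        was_open = not closed
    return blinks

# Fabricated landmarks for a roughly open eye.
open_eye = [(0, 3), (2, 1), (4, 1), (6, 3), (4, 5), (2, 5)]
print(round(eye_aspect_ratio(open_eye), 2))  # ~0.67, clearly open

# Fabricated EAR trace for a few seconds of video: mostly open (~0.3),
# with two brief dips that look like blinks.
trace = [0.31] * 40 + [0.12] * 3 + [0.30] * 50 + [0.11] * 3 + [0.32] * 40
print(count_blinks(trace))  # -> 2; an unusually low blink rate can flag a fake
```

A real detector would compute the ratio from facial landmarks located in every frame and compare the resulting blink rate against normal human ranges, alongside other cues such as lip and head movement.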

Human-made menace that adapts

As nations and techies begin to counter deepfakes, the creators of the malicious videos are learning and adapting too.

The makers of deepfakes are reportedly refining their videos so that they evade the detection tests built by the likes of Amber.

No one knows for sure when the deepfake trend will stop. But it is certain that deepfakes can cause irreversible damage to the people they target, and they are particularly harmful in the hyper-connected world we live in today.
