DEEP FAKES SERIES / TECH-ISSUES / PORTENTS / WHITE PAPER – 2
“Sujit, please do not put your daughter’s pictures on Facebook. We treasure our daughters, and you cannot take a chance with how someone can manipulate a girl’s photograph,” a former colleague had once urged me.
That was a decade ago, when the impossibilities that have suddenly become reality today – through the bizarre deployment of an otherwise revolutionary technology, Artificial Intelligence (AI) – had not even been dreamt of.
Now my daughter is 15, and rather on the prettier side, and I shudder to think what could have happened.
Look at the video that surfaced featuring actress Rashmika Mandanna’s facial likeness morphed over that of British-Indian social media personality Zara Patel. Or the lurid video portraying Indian actress Alia Bhatt.
When my friend Puja had warned me, my daughter was just five years old, and the worst that technology could do 10 years ago was morph the pictures of small girls to churn out child pornography. But today, it is the age when, as the iconic thriller novel by Alistair MacLean was titled, Fear is the Key.
It is a field day for the makers of such deepfakes, and they spare no one – from Indian stars such as Mandanna and Bhatt right down to the Ukrainian president, shown asking his soldiers to surrender to Russia!
Just calculate the destructive power in the hands of the technology’s abusers: had the rank and file of an entire nation believed that deepfake video and surrendered, Ukraine would have been wiped out as a nation.
So, what is a deepfake?
Deepfakes are hyper-realistic videos, audio clips, and images created by algorithms, and are today among the latest technological developments in the area of artificial intelligence.
Simply understood, a deepfake is AI that seeks to deceive. It refers to an artificial intelligence-driven technology that uses machine learning algorithms – more specifically, what are known as Generative Adversarial Networks (GANs) – to produce synthetic media.
GANs are a class of deep learning architectures—deep learning is a subfield of machine learning that makes use of algorithms inspired by the complex framework and functionality of the human brain, referred to as artificial neural networks.
How does this work?
These networks are employed to effectively process and analyse large data sets. Deep learning has seen extensive application across various domains, including computer vision, natural language processing, speech recognition, and robotics.
GANs pit two neural networks, a generator and a discriminator, against each other during training on a given dataset. The objective is to generate fresh synthetic data that closely resembles the characteristics of the original data.
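The adversarial loop described above can be sketched in miniature. The toy below is purely illustrative – it uses a one-dimensional Gaussian as a stand-in for “real media”, a straight-line generator, and a logistic discriminator, with hand-derived gradients; real deepfake systems use large convolutional networks, but the generator-versus-discriminator tug-of-war is the same:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real data": samples from a Gaussian centred at 4.0 (a stand-in for real media).
def sample_real():
    return random.gauss(4.0, 0.5)

# Generator: turns random noise z into a synthetic sample g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c), real vs fake.
w, c = 0.0, 0.0

lr, batch = 0.05, 16
for step in range(2000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    dw = dc = 0.0
    for _ in range(batch):
        xr = sample_real()
        z = random.gauss(0.0, 1.0)
        xf = a * z + b
        dr = sigmoid(w * xr + c)          # D's verdict on a real sample
        df = sigmoid(w * xf + c)          # D's verdict on a fake sample
        dw += (1 - dr) * xr - df * xf     # gradient ascent on log-likelihood
        dc += (1 - dr) - df
    w += lr * dw / batch
    c += lr * dc / batch

    # --- Generator step: push D(fake) toward 1 (non-saturating GAN loss) ---
    da = db = 0.0
    for _ in range(batch):
        z = random.gauss(0.0, 1.0)
        xf = a * z + b
        df = sigmoid(w * xf + c)
        da += (1 - df) * w * z
        db += (1 - df) * w
    a += lr * da / batch
    b += lr * db / batch

# After training, the generator's output distribution has drifted toward the real one.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generator output mean ~{fake_mean:.2f} (real data mean is 4.0)")
```

The generator never sees the real data directly; it improves only by fooling the discriminator – which is exactly why GAN-made fakes end up statistically resembling authentic media.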
…and then there are “shallow fakes”…
Tampering with the personal prestige and social standing of prominent national and global figures – from writers to film stars to presidents of countries – has been a very old human vice, and it now comes with massive tech support.
Shallow fakes, or cheap fakes, are multimedia that has been manipulated using techniques that do not involve machine/deep learning but are in many cases just as effective as the more technically sophisticated methods. They are mostly generated by manipulating the original message conveyed in real media.
These can include:
• Selectively copying and pasting content from an original scene to remove an object in an image and thereby change the story. In the first part of this series (The Menace of Deep Fakes In 2024), we had mentioned how a public speech by the Union Home Minister had been cut up to make him seem to say that all of the educational qualifications of Prime Minister Narendra Modi had been fake. (And we shall return to this later in the series.)
• Slowing down a video by adding repeated frames, so that an individual appears to slur or mispronounce words and ends up looking like a blabbering clown. The supposed videos of Modi trying to pronounce “biodegradable” were one such example.
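The frame-repetition trick in the second bullet needs no AI at all. A minimal sketch in Python makes the point – here a plain list of labels stands in for decoded video frames (a real pipeline would decode and re-encode with a video library such as OpenCV or ffmpeg):

```python
def slow_down(frames, factor=2):
    """Repeat each frame `factor` times: played back at the original frame
    rate, the clip takes `factor`x as long, making speech sound slurred.
    No machine learning is involved - this is a classic shallow fake."""
    return [frame for frame in frames for _ in range(factor)]

# Toy stand-in for a decoded video: three frames.
original = ["f1", "f2", "f3"]
slowed = slow_down(original)
print(slowed)  # ['f1', 'f1', 'f2', 'f2', 'f3', 'f3']
```

The very simplicity of the edit is what makes cheap fakes so widespread: anyone with basic editing software can produce one.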
Deepfakes, however, take this to an altogether different and dangerous level: These are media that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence). Some well-known examples include:
• In 2022, LinkedIn experienced a huge surge in deepfake images used as profile pictures.
• In May 2023, an AI-generated image – the product of AI hallucination, made-up information that may seem plausible but is not true – depicted an explosion near the Pentagon and was shared around the internet, causing general confusion and turmoil in the stock market.
• A deepfake video showed Ukrainian President Volodymyr Zelenskyy telling his country to surrender to Russia.
• Conversely, many Russian TV channels and radio stations were hacked, and a purported deepfake video of Russian President Vladimir Putin was aired in which he claimed to be enacting martial law because Ukrainians were invading Russia.
• Another example is Text-to-Video Diffusion Models, which produce fully synthetic videos created entirely by AI.
• In 2019, deepfake audio – yes, audio can be deepfaked too! – using voice-skins or voice-clones of public figures was used to steal $243,000 from a UK company.
• Openly accessible Large Language Models (LLMs) are now being used to generate the text for phishing emails.
Nuances of the Nuisance
These tech misuses can have layers of nuances. And here is one that can have terrible effects.
I have seen scores of videos that show – and name – several persons claiming that all the idols for the upcoming Ram Mandir have been created by Muslim idol makers.
Now let me make it clear: I have no personal ire if some Muslims had indeed done that. That would be a highly secular development.
Indeed, I know for sure of the many Durga Pujas in Bengal and at least one in Odisha that are fully conducted by Muslims.
I know that for the last 13 generations, Muslims from Sindh province (now in Pakistan) have been serving as priests of the Durga temple located at Bageria village of Bhopalgarh Tehsil in Jodhpur district. Just as I also know, from having watched a video, of a 5,000-year-old Shiva temple complex being preserved by the Pakistan government.
But the release of such a potentially incendiary post or video just ahead of a communally sensitive event awaited for decades by millions of Hindus could have an extremely serious fallout.
And the coming election year will undoubtedly see a proliferation of deepfake political lies, and from all quarters.
*This series is based on a White Paper developed by The Processor Intelligence Unit
(Coming up: The Tech And Its Nuances At The Hustings)