AI Deepfakes: A Techno-‘Covid’ Pandemic

Like COVID-19, AI deepfakes can attack and annihilate anyone… politicians, film stars, even ordinary people. So how do we, as in the case of Covid, develop a regimen to arrest the further spread of the scourge?

Sujit Chakraborty

AI DEEPFAKES / CAMPAIGN SERIES / GLOBAL SCOURGE / REGULATION ISSUES 

The AI Deepfake industry is becoming a runaway rogue that can destroy anyone or anything, including other industries and businesses and particularly endanger political personages.  

As they seek to build consensus and establish the required checks and balances that would act as a bulwark against the rapidly rising deepfake horde, regulators and policymakers around the world face a complex proposition.  

Yet deepfakes can also become positive force-multipliers for technology companies, the global creative economy, the healthcare sector, consumers, and even the entertainment industry.  

What does the law say? Ian Sample, science editor of The Guardian, wrote in January 2020 in his piece “What are deepfakes – and how can you spot them?”: “Deepfakes are not illegal per se, but producers and distributors can easily fall foul of the law. Depending on the content, a deepfake may infringe copyright, breach data protection law, and be defamatory if it exposes the victim to ridicule.”  

“There is also the specific criminal offence of sharing sexual and private images without consent, i.e. revenge porn, for which offenders can receive up to two years in jail.  

“In Britain the law is split on this. In Scotland, revenge porn law includes deepfakes by making it an offence to disclose, or threaten to disclose, a photo or film which shows or appears to show another person in an intimate situation. But in England, the statute carefully excludes images that have been created solely by altering an existing image.” 

If this doesn’t tell us about the predicament lawmakers around the world are facing today, nothing can.  

What is indeed clear is that the world must focus on better research and collaborations between governments—especially with countries that already have stronger regulatory mechanisms showing the way—and industries to arrive at more effective recourse mechanisms and create the necessary standards for the world to respond to what is clearly a very complex problem at the moment.      

As research on the deepfake phenomenon expands and new incidents continue to emerge rapidly, the global economy, and the ICT industry in particular, is paying serious attention to finding the most effective ways to mitigate these threats, even as it evaluates the opportunities they present.  

Several research studies indicate that deepfakes mainly damage a company’s image, reputation, and trustworthiness.  

But they may be equally dangerous for individuals and consumers, spawning, as they do, an era of blackmail, bullying, defamation, harassment, identity theft, intimidation, and revenge porn, impacting a huge array of human endeavours.  

Major Global Action So Far

China: In 2019, China introduced laws that mandate individuals and organizations to disclose when they have used deepfake technology in videos and other media. 

The regulations prohibit the distribution of deepfakes without a clear disclaimer that the content has been artificially generated.  

China recently established provisions for deepfake providers, in effect as of 10 January 2023, through the Cyberspace Administration of China (CAC).  

The contents of this law affect both providers and users of deepfake technology and establish procedures throughout the lifecycle of the technology from creation to distribution. 

European Union: The EU has called for increased research into deepfake detection and prevention, as well as regulation that would require clear labelling of artificially generated content.   

It has also proposed laws requiring social media companies to remove deepfakes and other disinformation from their platforms.  

The EU’s Code of Practice on Disinformation addresses deepfakes through fines of up to 6 per cent of global revenue for violators.  

The code, initially introduced as a voluntary self-regulatory instrument in 2018, is now backed by the Digital Services Act, which increases the monitoring of digital platforms for various kinds of misuse.  

Finally, under the proposed EU AI Act, which takes a risk-based approach to regulation, deepfake providers would be subject to transparency and disclosure requirements. 

South Korea: Globally recognized for its technological prowess, South Korea was one of the first countries to invest in AI research and regulatory exploration.  

In 2016, South Korea announced an investment of about US$750 million in AI research over five years. In December 2019, the country announced its National Strategy for AI.  

In 2020, South Korea passed a law that makes it illegal to distribute deepfakes that could harm the public interest, with offenders facing up to five years of imprisonment or fines of up to approximately US$43,000.  

Additional measures focus on digital pornography and sex crimes through interventions such as education, civil remedies, and strong recourse mechanisms.

United States: The country has asked its Department of Homeland Security (DHS) to establish a task force to address digital content forgeries, also known as “deepfakes.”

Many states have enacted their own legislation to combat deepfakes, though there is still no federal regulation on the subject.  

The proposed DEEP FAKES Accountability Act is another proactive step in the right direction: it would make it illegal to create or distribute deepfakes without consent or proper labelling. 

India: So far, the Indian government has not enacted any dedicated legislation on deepfakes, as some other countries have. But that it is fully seized of the severity of the problem is clear from a recent advisory. 

Amid growing concerns over deepfake videos, the government said in its latest advisory: “The content not permitted under the IT Rules, in particular those listed under Rule 3(1)(b), must be clearly communicated to the users in clear and precise language, including through its terms of service and user agreements.”

The directive specifically targets the growing concerns around misinformation powered by AI – deepfakes. 

It added that "the same must be expressly informed to the user at the time of first registration and also as regular reminders, in particular, at every instance of login and while uploading/sharing information onto the platform."