AI Deepfake: The Indian Approach

As it stands, the deepfake case of actor Rashmika Mandanna has shaken the Indian government. And with make-or-break trials at the hustings looming this year, the authorities are even more cautious. Though there is no definitive legislation in place so far, the ‘advisories’ are more than nerve-racking for Indian deepfakers.

Sujit Chakraborty

India: So far, the Indian government has not come out with any dedicated legislation on deepfakes, as some other jurisdictions have. But that it is completely seized of the severity of the problem is clear from a recent advisory.

Amid growing concerns over deepfake videos, the ministry said in its latest advisory, “The content not permitted under the IT Rules, in particular those listed under Rule 3(1)(b), must be clearly communicated to the users in clear and precise language, including through its terms of service and user agreements.”

It added that “the same must be expressly informed to the user at the time of first-registration and also as regular reminders, in particular, at every instance of login and while uploading/sharing information onto the platform.”

The directive specifically targets growing concerns around AI-powered misinformation, notably deepfakes.

Section 66E of the Information Technology Act, 2000 (IT Act): This is applicable in cases of deepfake crimes that involve the capture, publication, or transmission of a person’s images in mass media, thereby violating their privacy.

Such an offence is punishable with up to three years of imprisonment or a fine of ₹2 lakh.  

Similarly, Section 66D of the IT Act punishes individuals who use communication devices or computer resources with malicious intent, leading to impersonation or cheating. An offence under this provision carries a penalty of up to three years imprisonment and/or a fine of ₹1 lakh. 

Sections 67, 67A, and 67B of the IT Act: These can be used to prosecute individuals for publishing or transmitting deepfakes that are obscene or contain any sexually explicit acts. 

The IT Rules also prohibit hosting ‘any content that impersonates another person’ and require social media platforms to quickly take down ‘artificially morphed images’ of individuals when alerted.

In case they fail to take down such content, they risk losing the ‘safe harbour’ protection — a provision that protects social media companies from regulatory liability for third-party content shared by users on their platforms. 

Indian Penal Code, 1860 (IPC): Provisions here can be used for cybercrimes associated with deepfakes: Sections 509 (words, gestures, or acts intended to insult the modesty of a woman), 499 (criminal defamation), and 153A and 153B (spreading hate on communal lines), among others.

The Copyright Act of 1957: This can be used if any copyrighted image or video has been used to create deepfakes. Section 51 prohibits the unauthorised use of any property belonging to another person in which the latter enjoys an exclusive right.

GoI: Current Position

Following the outrage over the Mandanna deepfake video, Union Minister of Electronics and Information Technology Ashwini Vaishnaw chaired a meeting on November 23 with social media platforms, AI companies, and industry bodies, close on the heels of Prime Minister Narendra Modi’s acknowledgment that “a new crisis is emerging due to deepfakes” and that “there is a very big section of society which does not have a parallel verification system” to tackle this issue.

The minister also announced the introduction of draft rules, which would be open to public consultation, while committing to address the issue within ten days. 

The rules would impose accountability on both creators and social media intermediaries; the minister added that all social media companies had agreed on the need to label and watermark deepfakes.
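The draft rules, and any standard watermark format, are yet to be published. But as a rough sketch of what “label and watermark” could mean in practice, the Python snippet below (using the Pillow imaging library) stamps a visible disclosure on an image and writes a machine-readable tag into its metadata. The “ai-generated” field name and the checking helper are illustrative assumptions, not anything prescribed by the ministry or agreed by the platforms.

```python
# Illustrative sketch only: stamps a visible disclosure on an image and adds a
# machine-readable PNG text chunk. The "ai-generated" key is a hypothetical
# convention, not a published standard.
from PIL import Image, ImageDraw, PngImagePlugin

def label_generated_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Visible watermark: a disclosure string in the bottom-left corner.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated content", fill=(255, 255, 255))

    # Machine-readable label: a PNG text chunk a platform could inspect on upload.
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-generated", "true")
    img.save(dst_path, "PNG", pnginfo=meta)

def is_labelled(path: str) -> bool:
    # Metadata can be stripped by simply re-encoding the file, which is one
    # reason visible marks and platform-side detection are discussed alongside
    # labelling obligations.
    return getattr(Image.open(path), "text", {}).get("ai-generated") == "true"
```

A platform-side check could then flag or refuse uploads whose declared labels are missing, though, as the comment notes, metadata alone is easy to strip, which is why the policy debate pairs labelling with detection.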

Meanwhile, the Minister of State for Electronics and Information Technology (MeitY), Rajeev Chandrasekhar, has held the view that India’s existing legal framework is adequate to deal with deepfakes but needs very strict enforcement.

He said that a special officer (Rule 7 officer) would be appointed to closely monitor any violations, while an online platform would be set up to assist aggrieved users and citizens in filing FIRs for deepfake crimes. 

Lighthouse: Industry, Big Tech

Soon after the government moved to rein in rising incidents of deepfake deception, warning social media platforms to comply, YouTube India reaffirmed its commitment to tackling deepfakes, emphasising that “keeping such content was not in its interest”.

It said it was complying with Indian laws in close cooperation with the government. YouTube creators will soon have to mandatorily disclose any use of generative AI in their content and inform viewers accordingly, with labels added in the description box and on the video player.

Internet giant Google has also said it is teaming up with the Government of India to tackle the problem, especially risks such as online misinformation campaigns built on synthetic media. It has highlighted initiatives to combat and flag AI-generated content through a mix of machine learning and human reviewers, while its recently updated election advertising policies now require publishers to state if their ads include digitally altered or generated content intended to deceive, mislead, or defraud users.
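Google has not published the internals of that pipeline, but the pattern it describes, where a model scores content and uncertain cases go to human reviewers, is a standard triage design. Here is a minimal sketch; the thresholds, names, and the stub detector are all illustrative assumptions, not Google’s actual system.

```python
# Minimal sketch of an ML-plus-human-review triage loop for flagging
# suspected synthetic media. Everything here is a placeholder convention.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str    # "synthetic", "authentic", or "needs-review"
    score: float  # model's estimated probability the content is AI-generated

def score_synthetic(content: bytes) -> float:
    # Stand-in for a real detector (e.g. a classifier over video frames or
    # audio). Fixed value here so the sketch runs end to end.
    return 0.5

def triage(content: bytes, hi: float = 0.9, lo: float = 0.1) -> Verdict:
    s = score_synthetic(content)
    if s >= hi:
        return Verdict("synthetic", s)   # confident: auto-label as AI-generated
    if s <= lo:
        return Verdict("authentic", s)   # confident: leave unlabelled
    return Verdict("needs-review", s)    # uncertain: queue for a human reviewer

print(triage(b"example upload"))  # Verdict(label='needs-review', score=0.5)
```

The design choice worth noting is the middle band: rather than forcing the model to decide everything, ambiguous content is routed to humans, which is what “a mix of machine learning and human reviewers” amounts to in practice.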

There is broad global consensus, especially in the technology industry, that more proactive action is needed to counter deepfakes. The success of much of this action will depend on how global industry and big tech companies lead the way in creating a comprehensive and proactive approach that combines technological, educational, and legislative strategies.

A legal and ethical framework surrounding deepfake technology is needed. Governments and international organizations must embrace a multistakeholder approach and collaborate to establish regulations and standards that protect against the misuse of deepfakes.

The key here really is to ensure that the transformational potential of AI benefits society while lowering risks through responsible AI innovation.

Call to action: Ensuring gains truly outweigh risks 

Given the speed at which social media operates, a deepfake can rapidly reach millions of people, with implications across a range of areas: marketplaces, entertainment (primarily the movie business), space, defence, education, individual companies and their consumers, and, most crucially, electoral politics in democracies, where deepfakes become a tool for misinformation.

While our knowledge of deepfakes is still limited and, therefore, varied, a growing body of research and evidence points to their rising significance, especially the threats and deceptions they pose for businesses, citizens, and politics today.

The Google Example

But there is another argument that points towards a real possibility: deepfakes may distract us from what has always been the challenge, namely threats to the flow of genuine information in society, an issue that had been around for decades before deepfakes arrived.

While there is indeed no question today about bad actors using deepfakes to spread misinformation and deceive people, there is slow but growing recognition that such phenomena, in the long term, might create just the kind of social and technological disruption required to foster greater online trust.  

Google’s Gemini is a fine example of such advances currently underway.  

Google’s latest offering, the Gemini AI, is being considered a significant breakthrough in AI modelling; the company claims it has advanced “reasoning capabilities” to “think more carefully” when answering hard questions.

In their article “Google claims new Gemini AI ‘thinks more carefully’”, Shiona McCallum and Zoe Kleinman of the BBC’s technology team write: “AI content generators are known to sometimes invent things, which developers call hallucinations. Gemini was tested on its problem-solving and knowledge in 57 subject areas including maths and humanities.”

Google’s Gemini AI points towards the deep research and development currently underway within the global technology industry, underscoring the recognition that even as deepfakes evolve and become more sophisticated and dangerous, the benefits to be accrued from AI technology can far outweigh the real and imagined risks populating the metaverse and our increasingly algorithmic global economy.

As technology R&D gathers pace, bad actors will have to contend with far more sophisticated, robust and, in the final analysis, much more effective detection and mitigation algorithms and technology systems.

The good news is that these capabilities are rapidly improving, driven by urgent global multilateral collaboration in technology and policymaking; an increasingly proactive pushback against bad actors by big tech players such as Google, YouTube, and Facebook to foster safety, trust, and online harmony; and governments around the world taking the issue very seriously.

In the Indian context, there is a clear opportunity for the country to lead this nascent but sure global pushback against deepfakes, given its focus on ethical and equitable innovation aimed at ushering in transformative change for its citizens and for people around the world who need it most.

(This is the concluding article in The Processor’s AI Deepfake campaign.)