
Important Editorial Summary for UPSC Exam

8 Dec 2023

Regulating deepfakes and AI in India (GS Paper 3, Science and Technology)


Context:

  • Recently, a video featuring actor Rashmika Mandanna went viral on social media, sparking shock and horror among netizens. The seconds-long clip, which featured Mandanna’s likeness, was manipulated using deepfake technology.
  • Deepfakes are digital media (video, audio, and images) edited and manipulated using Artificial Intelligence (AI). Since they incorporate hyper-realistic digital falsification, they can potentially be used to damage reputations and undermine trust in democratic institutions.
  • The phenomenon has also made its way into political messaging, a particular concern in the run-up to the general elections next year.


Have deepfakes been used in politics?

  • Back in 2020, in what was reportedly the first use of AI-generated deepfakes in an Indian election campaign, a series of videos of Bharatiya Janata Party (BJP) leader Manoj Tiwari was circulated on multiple WhatsApp groups.
  • In a similar incident, a doctored video of Madhya Pradesh Congress chief Kamal Nath recently went viral, creating confusion over the future of the State government’s Laadli Behna Scheme.
  • Other countries are also grappling with the dangerous consequences of rapidly evolving AI technology. In March 2022, a deepfake of Ukrainian President Volodymyr Zelenskyy asking his countrymen to lay down their weapons went viral after cybercriminals hacked into a Ukrainian television channel.


How did deepfake tech emerge?

  • Deepfakes are made using technologies such as AI and machine learning, blurring the lines between fiction and reality.
  • Although they have benefits in education, film production, criminal forensics, and artistic expression, they can also be used to exploit people, sabotage elections and spread large-scale misinformation.
  • While editing tools such as Photoshop have been in use for decades, the first reported use of deepfake technology can be traced back to a Reddit user who, in 2017, used publicly available AI-driven software to create pornographic content by superimposing the faces of celebrities onto the bodies of ordinary people.
  • Deepfakes can now be generated easily by semi-skilled and even unskilled individuals by morphing audio-visual clips and images. Even as such fakes become harder to detect, more resources are becoming available to help people guard against their misuse.
  • For instance, the Massachusetts Institute of Technology (MIT) created the Detect Fakes website to help people identify deepfakes by focusing on small, intricate details. The use of deepfakes to perpetrate online gendered violence has also been a rising concern.


What are the laws against the misuse of deepfakes?

  • India lacks specific laws to address deepfakes and AI-related crimes, but provisions in several existing laws can offer both civil and criminal remedies.
  • For instance, Section 66E of the Information Technology Act, 2000 (IT Act) applies to deepfake crimes involving the capture, publication, or transmission of a person’s images in mass media, thereby violating their privacy.
  • Such an offence is punishable with up to three years of imprisonment or a fine of up to ₹2 lakh. Further, Sections 67, 67A, and 67B of the IT Act can be used to prosecute individuals for publishing or transmitting deepfakes that are obscene or depict sexually explicit acts.
  • The IT Rules, 2021 also prohibit hosting ‘any content that impersonates another person’ and require social media platforms to quickly take down ‘artificially morphed images’ of individuals when alerted.
  • If they fail to take down such content, platforms risk losing the ‘safe harbour’ protection, a provision that shields social media companies from liability for third-party content shared by users on their platforms.


IPC Provisions:

  • Provisions of the Indian Penal Code (IPC) can also be invoked for cybercrimes associated with deepfakes, including Sections 509 (words, gestures, or acts intended to insult the modesty of a woman), 499 (criminal defamation), and 153A and 153B (spreading hate on communal lines), among others.
  • The Delhi Police Special Cell has reportedly registered an FIR against unknown persons by invoking Sections 465 (forgery) and 469 (forgery to harm the reputation of a party) in the Mandanna case.


What has been the Centre’s response?

  • The Union Minister of Electronics and Information Technology recently chaired a meeting with social media platforms, AI companies, and industry bodies where he acknowledged that “a new crisis is emerging due to deepfakes” and that “there is a very big section of society which does not have a parallel verification system” to tackle this issue.
  • He also announced that the government will introduce draft regulations, which will be open to public consultation, within the next 10 days to address the issue.


How have other countries fared?

  • In October 2023, U.S. President Joe Biden signed a far-reaching executive order on AI to manage its risks, ranging from national security to privacy.
  • Additionally, the DEEP FAKES Accountability Bill, 2023, recently introduced in Congress, requires creators to label deepfakes on online platforms and to provide notice of alterations made to a video or other content. Failure to label such ‘malicious deepfakes’ would invite criminal sanctions.
  • The European Union (EU) has strengthened its Code of Practice on Disinformation to ensure that social media giants like Google, Meta, and Twitter start flagging deepfake content or potentially face fines.
  • Further, under the proposed EU AI Act, deepfake providers would be subject to transparency and disclosure requirements.


What’s next?

  • AI governance in India cannot be restricted to a single law; reforms have to be centred on establishing safety standards, increasing awareness, and building institutions.
  • India’s regulatory response cannot be a replica of laws in other jurisdictions such as China, the US, or the EU.