Important Editorial Summary for UPSC Exam

12 Dec 2023

EU's historic deal: world's first law to regulate AI proposed (GS Paper 3, Science and Technology)

Why in news?

  • The European Union has reached a provisional deal on the world's first comprehensive law to regulate the use of artificial intelligence.
  • After 36 hours of talks between the European Parliament and the EU member states, negotiators agreed on rules for AI systems such as ChatGPT and facial recognition.
  • The European Union’s legislative framework assumes significance given that the US, the UK, and China are also jostling to set the template for AI regulation by publishing their own guidelines.

 

The EU framework

  • The legislation includes safeguards on the use of AI within the EU, including clear guardrails on its adoption by law enforcement agencies, and it empowers consumers to lodge complaints against any perceived violations.
  • The deal includes strong restrictions on facial recognition technology, and on using AI to manipulate human behaviour, alongside provisions for tough penalties for companies breaking the rules.
  • Governments can use real-time biometric surveillance in public areas only when serious threats are involved, such as terrorist attacks.
  • The legislation was designed to be “much more than a rulebook”; it is proposed as “a launch pad for EU start-ups and researchers to lead the global AI race”.

 

Categorization:

  • In terms of details, the EU legal framework broadly divides AI applications into four risk classes. At one end, some applications will be largely banned, including the mass-scale deployment of facial recognition, with some exemptions for law enforcement. AI applications focused on behavioural control will also be banned.
  • High-risk applications, such as the use of AI tools for self-driving cars, will be allowed, but subject to certification and an explicit provision for the backend techniques to be opened to public scrutiny.
  • Applications in the “medium risk” category, such as generative AI chatbots, can be deployed without restrictions, but there must be detailed documentation of how the technology works, and users must be explicitly made aware that they are dealing with an AI and not interacting with a human.
  • Developers will need to comply with transparency obligations before releasing chatbots into the market, including details about the content used to train the algorithm.

 

Leadership on regulation:

  • Over the last decade, Europe has taken a decisive lead over the US on tech regulation, with overarching laws safeguarding online privacy, regulations to curb the dominance of the tech majors and new legislation to protect its citizens from harmful online content.
  • On AI, though, the US has made an attempt to take the lead by way of the new White House Executive Order on AI, which is being offered as an elaborate template that could work as a blueprint for every other country looking to regulate AI.
  • In October 2022, the US released a blueprint for an AI Bill of Rights, seen as a building block for the subsequent executive order.
  • The US’s move assumed significance given that, over the last quarter century, the US Congress has not managed to pass any major regulation to rein in Big Tech companies or safeguard internet consumers, with the exception of just two laws: one on child privacy and the other on blocking trafficking content on the internet.

 

GDPR of the EU:

  • In contrast, the EU has enforced the landmark GDPR (General Data Protection Regulation) since May 2018. It is clearly focused on privacy, requires individuals to give explicit consent before their data can be processed, and is now a template used by over 100 countries.
  • Then there are a pair of sub-legislations, the Digital Services Act (DSA) and the Digital Markets Act (DMA), that build on the GDPR’s overarching focus on the individual’s right over her data.
  • While the DSA focuses on issues such as regulating hate speech and counterfeit goods, the DMA has defined a new category of “dominant gatekeeper” platforms and is focused on anti-competitive practices and the abuse of dominance by these players.

 

Different approaches:

  • These developments come as policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools, prompted by ChatGPT’s explosive launch.
  • The concerns being flagged fall into three broad heads: privacy, system bias and violation of intellectual property rights.
  • The policy response has also differed across jurisdictions. The EU has taken a predictably tougher stance that segregates AI by use case, based broadly on the degree of invasiveness and risk. The UK is seen to be at the other end of the spectrum, with a decidedly ‘light-touch’ approach that aims to foster innovation in this nascent field.
  • The US approach slots somewhere in between. China too has released its own set of measures to regulate AI.

 

India’s approach

  • India has pitched itself, especially to nations in the Global South, as a country that has effectively used technology to develop and deliver governance solutions, at a mass scale.
  • These solutions are at the heart of what India calls Digital Public Infrastructure (DPI), where the underlying technology is sanctioned by the government and later offered to private entities to develop various use cases. Now, India wants to take the same DPI approach with AI.
  • With sovereign AI and an AI computing infrastructure, India is hoping to focus on real-life applications of the tech in healthcare, agriculture, governance, language translation, etc., to catalyse economic development.

 

Way Forward:

  • The European Parliament will now vote on the proposed AI Act early next year, and the legislation is likely to come into force by 2025.