The EU AI Act: A landmark in AI regulation (GS Paper 3, Science and Technology)
Context:
- On 9 December 2023, the European Parliament and the Council of the European Union arrived at an agreement on the European Union Artificial Intelligence Act (EU AI Act).
- While the final draft is yet to be published, the broad contours have been set for what could prove to be a landmark in the history of AI regulation.
- With this, the EU has become one of the first major jurisdictions in the world to regulate AI.
Why has the Act been brought in?
- In recent years, rapid advances in AI have raised questions about the preparedness of governments and regulatory agencies when it comes to safeguarding citizens’ rights and well-being.
- AI applications in fields like health and education, among others, have potentially far-reaching implications. Moreover, the AI industry is itself a trillion-dollar business opportunity of which governments want a share, and unlike the internet, AI is not a product of government laboratories.
- The European Commission first presented a draft regulation for AI in the EU in April 2021. However, with the release of OpenAI’s ChatGPT in 2022, along with other technological advances, Brussels felt the need to take a fresh look at the original draft.
- Thus, after three days of intense negotiations, the three institutions of the Union arrived at a compromise that will shape the bloc’s AI regulation for the foreseeable future, with the aim of ensuring human oversight over AI.
Risk-based approach:
- The Act follows a “risk-based” approach. As per this approach, AI systems have been classified into four categories based on the risk they pose:
- unacceptable risk (social scoring; biometric identification, whether real-time or remote; biometric categorization that deduces personal preferences and beliefs; and cognitive manipulation);
- high risk (AI systems used in domains like transport, education as well as those used in products coming under the EU’s product safety legislation);
- general purpose and generative AI (systems like OpenAI’s ChatGPT); and
- limited risk (like deepfakes).
What does the Act entail?
- Systems categorised as unacceptable risk will be banned, while those termed high risk will undergo a compulsory fundamental rights impact assessment before being placed on the market and will carry a CE mark.
- General Purpose AI (GPAI) systems and the models on which they are based are required to follow transparency obligations. These include adhering to EU copyright law, preparing technical documentation, and releasing summaries of the training material used for these GPAI systems.
- More advanced GPAI models will be subject to stricter regulations. Limited-risk systems face no restrictions on their use, apart from a recommendation to adopt voluntary codes of conduct.
Exceptions:
- The use of unacceptable-risk AI systems will be allowed only in the case of very serious crimes, and will be subject to judicial approval and a defined list of offences.
- There are certain areas where the Act will not apply at all: military or defence uses; systems used only for research and innovation; and systems used by people for non-professional purposes.
Governance structure:
- It is expected that the Act will be enforced by competent national agencies in each of the 27 member states.
- At the European level, the European AI Office will be tasked with the administration and enforcement of the Act, while a European AI Board, composed of member states’ representatives, will act in an advisory capacity.
- To help small and medium enterprises (SMEs) grow, provisions for “Regulatory sandboxes” and “real-world testing” have been included.
- Citizens have also been given the right to seek redressal under the AI regime. They will be able to file complaints and “receive explanations about decisions based on high-risk AI systems that impact their rights.”
- Penalties for violations will range from €7.5 million to €35 million (or a percentage of turnover, whichever is higher). However, smaller companies will get some respite, as their fines will be capped.
Pros:
- The risk-based approach that the act follows is an innovative way of dealing with the myriad challenges that AI is expected to pose.
- It also balances the needs of law enforcement along with citizens’ rights.
- The provision of a fundamental rights impact assessment keeps citizens’ welfare at the forefront. Likewise, the right to seek redressal empowers citizens.
- The provisions which help the SMEs grow are also commendable.
Cons:
- Fears of over-regulation have been voiced in different quarters over some of the Act’s stringent provisions (like the high fines), with observers opining that these might stifle innovation.
- The Act envisions the setting up of a European AI Office and regulators in all the member states, which could prove difficult given the limited room for budgetary manoeuvre at present.
- The final text of the Act still remains to be finalised. This opens another set of uncertainties, as the process could stretch beyond the European parliamentary elections scheduled for June 2024.
- The Act might also get derailed if member states are not satisfied with its final provisions or see them as infringing on their own powers.
- Apart from that, the Act will not come fully into force before 2026. Given the speed of AI development, it might by then be found wanting in certain areas.
What’s next?
- With the AI Act, the EU has taken the first steps towards using and developing AI in a responsible manner. This places it ahead of its peers like the United States and the United Kingdom when it comes to regulating AI.
- This legislation has the potential to become a benchmark in AI regulation as was the case with the GDPR for data privacy and protection.
- However, the EU needs to make sure that the Act, in its final form, balances the protection of citizens’ rights with the need to encourage innovation, while retaining enough flexibility to keep pace with the speed of AI development.