The recently enacted European Union Artificial Intelligence Act is a forerunner, arriving even as nations struggle to frame laws to curb or restrict the misuse and negative impact of artificial intelligence. The Act is a welcome step towards a global AI regulation regime.
By Ashit Srivastava
The legal community has been at the forefront of efforts to regulate Artificial Intelligence (AI), framing ethical rules for its use. No discipline has been more active than law in trying to curb, or at least restrict, the possible negative impact of AI. Indeed, questions of AI liability have preoccupied the present generation ever since the onset of automation, especially self-driving cars. Running parallel has been the struggle to keep pace with the prolific leaps technology has made over the last decade or so. Into this mix has come the novel European legislation, the EU Artificial Intelligence Act, the first of its kind to be enacted.
Europe has a history of tackling technology and its possible repercussions. From the industrial revolution of the 19th century, which introduced ideas of capitalism to Europe, to the mid-1990s, when it enacted the Data Protection Directive, the European continent has brought modern laws to bear on modern legal questions. The European Parliament passed the EU Artificial Intelligence Act by an overwhelming majority. Under the Act, AI developers, manufacturers or distributors may face penalties of up to €35 million. It is a tremendous step at a time when the rest of the world is still attempting to put a check on the growth of AI systems. Interestingly, instead of laying down one umbrella regulation for AI on the European continent, the enacted law regulates AI in a classified manner, dividing systems into “Unacceptable AI”, “High-Risk AI” and “AI with limited or minimal risk”. In the case of “Unacceptable AI”, there is a complete prohibition on use: such systems target vulnerable groups or deploy manipulative techniques capable of violating the fundamental rights of citizens and European value systems. To elaborate:
- Unacceptable AI: AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort his or her behaviour in a manner that causes, or is likely to cause, physical or psychological harm. Subliminal techniques exploit human fallibilities that lie outside the control of the conscious mind, so that an individual cannot resist the resulting behaviour once the stimulus is presented. For example, human beings are heuristic, meaning they act promptly on limited information; they show affirmation bias, a psychological tendency to assent to things in a particular setting; and they show ranking bias, seen most often in search engine results, where we tend to choose the first option listed.
- Placing on the market any AI service that exploits the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort their behaviour in a manner likely to cause physical or psychological harm.
- AI that provides for social scoring of individuals to evaluate their trustworthiness.
- Running real-time biometric identification systems in publicly accessible spaces.
Articles 6 and 7 of the AI Act provide for high-risk AI. Article 6 classifies an AI system as high-risk where:
“(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.”
Annex II of the enactment lists the Union harmonisation legislation that must be read together with the EU AI Act. Additionally, Annex III lists the systems that are to be regarded as high-risk AI, such as:
- AI systems intended to be used for the “real-time” and “post” remote biometric identification of natural persons.
- AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating or electricity.
- AI systems used to determine the access of natural persons to educational or vocational training institutions, to assign students to such institutions, and to assess participants in tests commonly required for admission.
- AI used in recruitment and promotion decisions, including screening or filtering applications and evaluating the eligibility of candidates.
- AI used to assess the eligibility of natural persons for essential private or public services.
- AI systems used for law enforcement, in particular to determine the likelihood of recidivism of a natural person.
- AI systems used for asylum, migration and border control management.
- AI systems used to assist the judicial system in researching and interpreting facts and the law.
Article 52 of the AI Act provides for limited-risk AI systems, laying down “transparency obligations for certain AI systems”:
- AI interacting with a human being should be designed in such a manner that the person is aware that he or she is dealing with AI.
- Emotion recognition systems must inform the natural persons exposed to them that such a system is in operation.
- Where AI creates lifelike images or audio (“deep fakes”), it must be disclosed that the content has been artificially created or manipulated.
Beyond these tiers, there is a residual minimal-risk category, covering applications such as AI-enabled video games, which pose little risk to fundamental rights.
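For readers who think in code, the Act’s tiered architecture can be pictured as a cascading classification, checked from the most to the least restrictive tier. The Python sketch below is purely illustrative: the field names and boolean tests are hypothetical simplifications of legal criteria that in reality demand case-by-case assessment.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers established by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "permitted subject to conformity assessment (Articles 6-7, Annex III)"
    LIMITED = "permitted subject to transparency duties (Article 52)"
    MINIMAL = "permitted with no additional obligations"

def classify(system: dict) -> RiskTier:
    """Map a toy description of an AI system to its risk tier.

    The flags below are illustrative stand-ins, not the Act's
    actual legal tests.
    """
    # Article 5: subliminal manipulation, exploitation of vulnerable
    # groups, social scoring, real-time biometric ID in public spaces.
    if system.get("prohibited_practice"):
        return RiskTier.UNACCEPTABLE
    # Articles 6-7 and Annex III: safety components and sensitive
    # use cases (education, employment, law enforcement, migration, justice).
    if system.get("safety_component") or system.get("annex_iii_use_case"):
        return RiskTier.HIGH
    # Article 52: chatbots, emotion recognition and deep fakes must
    # disclose their artificial nature.
    if system.get("interacts_with_humans") or system.get("generates_deep_fakes"):
        return RiskTier.LIMITED
    # Everything else, e.g. AI-enabled video games, is minimal risk.
    return RiskTier.MINIMAL

# Example: a recruitment-screening tool falls under Annex III.
print(classify({"annex_iii_use_case": True}))  # RiskTier.HIGH
```

The cascade matters: a system is tested against the prohibition first, so a chatbot that also engaged in social scoring would be banned, not merely subjected to transparency duties.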
A deeper analysis shows why the EU AI Act appears to be the appropriate approach: it recognises the different layers at which AI impacts the life of an individual. Quite understandably, not all AI needs to be treated the same; systems differ greatly in the gravity and degree of their influence, their manipulative capacity and their capacity to harm, physically or psychologically. Additionally, the enactment has appropriately addressed the unacceptable element of AI: for a decade or so, several stimulus-based AI systems have categorically targeted unconscious human fallibilities, some of them mentioned earlier, such as ranking bias, heuristic failings and affirmation bias, and these are just the tip of the iceberg.
The second layer, the AI appropriately placed in the high-risk tier, covers systems already in continuous use in the judiciary, the employment sector and law enforcement. AI tools for assessing recidivism have become common in most European countries, though America has taken the lead in this direction with COMPAS, an AI tool that predicts an individual’s likelihood of reoffending. Similarly, the employment sector has been deploying AI at every stage, from shortlisting individuals for interviews to selecting the eligible candidate.
With AI being adopted at such speed, the possibility of discrimination cannot be fully ruled out, especially since the data on which algorithms are trained are often themselves biased, and biased data produce further bias. Special mention must be made of the artificial chaos currently created by deep fakes: not only Europe but the whole of South Asia has been under constant attack from them.
The EU enactment has addressed the question by demanding greater transparency for individuals exposed to such content. Seen another way, with deep fakes becoming part of the techno-social fabric of civil life, what is required is a clear dividing line between what is real and what is artificial.
Though the EU Act is a welcome step towards a global AI regulation regime, there will be critiques as well, mostly on the theme that the Act may tend to overregulate. Only companies or entities of a certain scale will be in a position to bear the compliance burden, which will surely work as a disincentive for many developers and manufacturers of AI systems.
There is no doubt that there must be ethical limits on the development of AI tools, but not to an extent that stifles technological development, and this is a complex question the EU Act has been unable to address fully. Technology jurisprudence rests on balancing the interests of users, governments and private players, putting up much-needed guardrails even as AI platforms and applications grow at alarming speed.
—The writer is Assistant Professor of Law at Dharmashastra National Law University, Jabalpur