Andy Van Pachtenbeke *
The classic 2002 movie Minority Report depicts a fictional future (2054, to be precise) with self-driving cars, facial recognition, targeted advertising and mass surveillance to prevent crime. While the movie is meant to be science fiction, twenty years later all of these technologies exist or are at least possible. The movie also paints a very bleak picture of what happens when technology is used with complete disregard for ethical standards and human rights. If we want to keep Minority Report in the realm of science fiction, ethical standards for Artificial Intelligence (AI) are indispensable.
During its recent General Conference (November 2021), UNESCO’s 193 Member States adopted a Recommendation on the Ethics of Artificial Intelligence, the first global instrument of its kind. With the input of eminent experts and after many long hours of intergovernmental negotiations, a text was approved which is distinctly Human Rights-based. The emphasis on human rights becomes clear from the preamble of the Recommendation, where the term is used no fewer than seven times, including when referring to the possible impact and opportunities created by Artificial Intelligence. Human Rights are also prominently mentioned among the main “Aims and Objectives” of the Recommendation: “to protect, promote and respect human rights and fundamental freedoms, human dignity and equality, including gender equality;” (II, 8, c). While these general mentions are of course of great symbolic importance, the true measure of the Human Rights-Based Approach (HRBA) is in its implementation throughout the text. However, before going into more detail, it may be useful to examine the scope of the Recommendation.
Scope of the Recommendation
Artificial Intelligence encompasses a wide range of technologies, both existing and yet to be developed (or currently being developed). The drafters of the Recommendation have purposefully declined to define the concept in the text. While this provides less legal certainty, it does make the Recommendation comprehensive and future-proof. At this point, even the most renowned experts would be hard-pressed to predict what technology and Artificial Intelligence will look like in ten or twenty years, let alone further in the future. Any precise definition might therefore become obsolete before long. In fact, dealing with a society that evolves faster than legal instruments can be developed may be one of the biggest contemporary and future challenges of international law. More on that later in this text.
The lack of a proper definition does not mean that the Recommendation does not offer any guidance as to its scope. Firstly, the Recommendation “addresses ethical issues related to the domain of Artificial Intelligence to the extent that they are within UNESCO’s mandate” (I.1). Since UNESCO’s mandate is very broad and includes the entire realms of culture, education, natural sciences, social and human sciences, and communication and information, the Recommendation has a wide application, permeating many, if not most, aspects of society (including medical science, freedom of expression, disinformation, educational practices, academia, …). Secondly, even if there is no exact definition of AI, the text does offer some defining elements of AI systems: “systems which have the capacity to process data and information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control” (I.2). Finally, the Recommendation applies to the entire life cycle of AI systems, from research to development and use. During all stages, the ethical principles should be respected.
Human Rights-Based Approach
The technical complexity of the topic is somewhat reflected in the structure of the Recommendation. While distinguishing between values, principles (which are a more concrete translation of the values), and policy areas, the text actually formulates ethical principles to some extent in all these sections. The HRBA runs like a thread through the entire text. The scope of this article does not allow for a comprehensive study of the entire Recommendation in all its aspects, but I do want to highlight some of the relevant values, principles and policy areas.
Values
The very first value mentioned is “Respect, protection and promotion of human rights and fundamental freedoms and human dignity” (III.1.13). This part of the text also refers to the worth of every human being “regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability and any other grounds”. While no consensus could be reached on the explicit inclusion of sexual orientation in this list, the addition of “any other grounds” provides a catch-all that is generally understood to cover it. This solution may not be grammatically pleasing, but it does make the text more inclusive. For those who prefer the language of Molière, it must be said that the French text “ou de tout autre motif” is rather more elegant. The respect and protection of human rights are described as including both the absence of harm and the enhancement of the quality of life.
“Ensuring diversity and inclusiveness” (III.1.19) is mentioned as a separate value as well. Whereas the first value approaches the individual more as a passive subject whose rights need to be protected, this second one is aimed at the active participation of all individuals or groups, again, regardless of the same criteria mentioned in the list above.
Another value is “(l)iving in peaceful, just and interconnected societies” (III.1.22). Of course, nobody can be opposed to peace and interconnectedness. However, we must always make sure that this does not come at the expense of individual rights, which is why the text refers to an “interconnected future for the benefit of all, consistent with human rights and fundamental freedoms.” Also worth noting is that this provision does not only refer to the interconnectedness between human beings, but also between humans and their natural environment. It would lead us too far to elaborate on this, but next to human rights, the care for the environment is a recurring theme throughout the Recommendation.
Principles
Many of the principles included in the text more or less repeat what has already been stated under the “Aims and objectives” and “Values” sections of the Recommendation. The idea is to offer more guidance as to the specific implementation of those values.
“Proportionality and Do No Harm” (III.2.25). By referring to proportionality, legitimate aims or objectives, and appropriateness to the context, this principle basically incorporates the usual test for the protection of human rights. More importantly however, the text addresses some very specific situations: in cases of irreversible or life and death decisions, humans and not AI must have the final say. Furthermore, “AI systems should not be used for social scoring or mass surveillance purposes”. It is not very common for legal instruments to target situations with such specificity, and it is clear that this is inspired by existing situations.
“Fairness and non-discrimination” (III.2.28). This principle is a translation of the value of inclusiveness mentioned above. Beyond repeating the criteria already listed, which should not be used in an exclusionary way, this principle introduces a positive approach to inclusiveness, by listing the “specific needs of different age groups, cultural systems, different language groups, persons with disabilities, girls and women, and disadvantaged, marginalized and vulnerable people or people in vulnerable situations”. The text further highlights the importance of local communities, cultural diversity and the rural-urban divide.
“Right to Privacy, and Data Protection” (III.2.32). When thinking about AI, the right to privacy is probably one of the first aspects that comes to mind. This principle states that “it is important that data for AI systems be collected, used, shared, archived and deleted in ways that are consistent with international law and in line with the values and principles set forth in this Recommendation, while respecting relevant national, regional and international legal frameworks.” While informed consent is mentioned in this context, it features far less prominently than in, for example, the European Union’s General Data Protection Regulation. However, the requirement of consistency with international law implies that informed consent should be one of the essential elements in processing personal data.
“Transparency and explainability” (III.2.37). This principle recognizes that transparency is necessary for the effective protection of human rights. Without transparency, there is little chance of a fair trial or an effective remedy against certain decisions, nor of informed decisions regarding the exercise of one’s rights. According to the Recommendation, transparency includes being informed of the fact that a decision was taken on the basis of AI, accessing the reasons for such decisions, and the possibility of a human review. At the same time, transparency is not absolute, and can be limited for reasons of privacy, safety and security. The above-mentioned principle of proportionality once again comes into play here.
“Responsibility and accountability” (III.2.42). When speaking about human rights protection, the need for accountability is a given. However, when a decision has been taken on the basis of AI algorithms, this might be somewhat more complex. The Recommendation states that “ethical responsibility and liability for the decisions and actions based in any way on an AI system should always ultimately be attributable to AI actors corresponding to their role in the life cycle of the AI system”. While the text calls for tools such as oversight, traceability and whistle-blowers’ protection in this regard, the legal determination of responsibility and liability might prove a particularly difficult exercise in practice. It may require considerable technical expertise, and a breakdown of an entire AI process, to determine at which stage, and as a consequence of which human decision, a person’s rights have been affected.
Areas of policy action
The Recommendation itself suggests some policy areas in which States can apply and implement the values and principles. Whereas the values and principles form the universal basis of the Recommendation, these policy areas may be considered a bit more flexible, recognizing that the readiness to implement might differ between States. Most of these policy areas involve creating legal and policy frameworks and are of a rather technical nature. A few deserve mentioning in the context of the HRBA.
Ethical Impact Assessments (IV.50). The Recommendation calls upon States to develop frameworks for Ethical Impact Assessments when dealing with AI systems. Such assessments should notably examine the impact on human rights and fundamental freedoms. This includes broad testing before rolling out AI systems.
Data policy (IV.71). In line with what has been stated on the right to privacy, States should develop sufficient safeguards to protect this right. One of the provisions is particularly interesting when it comes to handling personal data: “Member States should ensure that individuals retain rights over their personal data and are protected by a framework, which notably foresees: transparency; appropriate safeguards for the processing of sensitive data; an appropriate level of data protection; effective and meaningful accountability schemes and mechanisms; the full enjoyment of the data subjects’ rights and the ability to access and erase their personal data in AI systems, except for certain circumstances in compliance with international law (…)”. When discussing the corresponding principle, I wrote that the Recommendation did not go quite as far as the EU’s GDPR. This provision, however, at the very least comes close to the spirit of the GDPR.
Gender (IV.87). In the section on values, the Recommendation mentions that human rights and inclusiveness must be ensured regardless of gender. This policy area goes a bit further, emphasizing the rights of girls and women, and the need to ensure that AI and digital technologies actually contribute to achieving gender equality. Among the specific actions that should be taken, the text mentions avoiding gender bias in AI systems, increasing the opportunities of girls and women in science, technology, engineering and mathematics (STEM), and avoiding online harassment.
Communication and information (IV.112). Within UNESCO, Communication and Information is the sector that notably deals with freedom of expression. That focus returns in the text of this policy area. The Recommendation presses States to ensure that AI actors respect and promote freedom of expression and access to information. It also mentions media and information literacy as essential tools in the fight against disinformation, misinformation and hate speech. This last aspect is one of the critical challenges of our time, which can be greatly exacerbated by the use of AI.
Private sector
A lot has been written about the role of private actors in international law, and in human rights law in particular. This article does not intend to contribute to the principled debate on this issue. However, from a very practical point of view, it is clear that private actors play a crucial role in all phases of the development and use of AI systems. In some sectors (ICT, social media, etc.) we are faced with huge multinational companies that dominate the field and that are very hard to regulate through national and other legislation.
Like all international instruments that have been negotiated through an intergovernmental process, this Recommendation is primarily aimed at State actors. Nevertheless, the role of the private sector is emphasized throughout the text, and several provisions are aimed at including private actors in the implementation of the values and principles.
This is partly achieved in the most traditional way, by pressing States to develop regulatory frameworks based on the values and principles, which would be applied to private actors. However, the Recommendation goes further. Already in the section on the Scope of Application, it states that it envisages providing ethical guidance to AI actors, including the public and private sectors. In other words, even if the Recommendation is an intergovernmental instrument, the hope exists that private actors will be inspired by its provisions in their own work with AI systems. The Recommendation also mentions the importance of shared responsibility and a multistakeholder approach. When addressing the specific values and principles, the text effectively refers to these as a responsibility of all AI actors, e.g. “Governments, private sector, civil society, international organizations, technical communities and academia must respect human rights instruments and frameworks in their interventions in the processes surrounding the life cycle of AI systems” (III.1.16) or “AI actors and Member States should respect, protect and promote human rights and fundamental freedoms” (III.2.42). The most direct appeal comes in the section on the utilization of the Recommendation: “Member States and all other stakeholders as identified in this Recommendation should respect, promote and protect the ethical values, principles and standards regarding AI that are identified in this Recommendation, and should take all feasible steps to give effect to its policy recommendations” (VI.135). Whether or not private actors will effectively take up this Recommendation may depend on the efforts of the various stakeholders, including States, civil society and UNESCO itself, to promote it.
A way to future-proof human rights?
The Recommendation is an instrument of soft law. Its provisions are not binding and its effectiveness depends in many ways on the goodwill of States (and other actors). While this is an obvious weakness when it comes to human rights protection, this approach also offers significant benefits.
First and foremost, developing a binding convention with 193 Member States would have been a gruelling, time-consuming process and would almost certainly have led to a text with a much weaker focus on human rights. The soft-law nature of this instrument is precisely what has allowed for such a prominent Human Rights-Based Approach.
Secondly, legally binding obligations require precise and directly applicable definitions, and a clearly demarcated scope. As stated above, it is currently impossible to agree on a fully comprehensive, future-proof definition of Artificial Intelligence. The decision-making process that leads to a convention is also too heavy and not flexible enough to adapt to new evolutions and constantly changing circumstances. In a society where technology evolves faster than the pioneers of international law could ever have imagined, treaties are simply not the right instrument to respond to those evolutions. The direct appeal to private actors would not have been possible through a binding convention either. In the current international law toolbox, soft law instruments seem to provide the best solution to these challenges.
One might also argue that we do not necessarily need new binding provisions, at least when it comes to human rights. The existing treaties and conventions provide a wide range of human rights and fundamental freedoms, which are as relevant now as they were at the time of their adoption. The key is to interpret and apply them taking into account evolving societies. This is where soft law instruments have an important, additional role to play, as a means of interpretation of existing human rights obligations. This is how soft law has repeatedly been applied by international human rights courts, so the idea is definitely not new. However, faced with the complexities and acceleration of societal changes, this method may become ever more important as a practical way to future-proof human rights.
Conclusion
The Recommendation is groundbreaking because it is the first global legal instrument on the ethics of AI, but also because of its very strong focus on human rights and its recognition of the importance of private actors. Moreover, it marks an era in which, alongside States and traditional private actors, we must take into account a new, intelligent, somewhat autonomous entity: technology itself. If anything, this Recommendation puts forward one essential principle, that of “Human Beings over AI”.
Of course, the Recommendation is not perfect. Its implementation is not guaranteed and without a doubt some new challenges will emerge that even the broad text of this instrument has not been able to foresee. Such is the eternal nature of law itself. Who is to say whether the singularity, the point where humankind loses control over technology, can be avoided altogether? Nevertheless, for those who want to apply an ethical, Human Rights-based Approach to Artificial Intelligence, this Recommendation provides a very solid basis. The future may not be utopian, but if these values and principles are respected, at least it will not be the dystopia that science fiction likes to depict either. If Tom Cruise and his friends in Minority Report had implemented the Recommendation on the Ethics of AI, the movie would have been distinctly more boring, but their society would have been all the better for it. It would be wise to keep the “fiction” in “science fiction”.
The full text of the Recommendation on the Ethics of Artificial Intelligence can be found here
* Andy Van Pachtenbeke is a jurist, specialized in international law and human rights law. Previously he worked as an attorney and a human rights researcher. He now works as a diplomat, currently posted to UNESCO. All views expressed in this article are strictly his own.
The Human Rights in Context Blog is a platform which provides an academic space for discussion for those interested in human rights, democracy, and the rule of law. We are always interested in well-written and thoughtful comment and analysis on topical events or developments. Scholars from all disciplines, students, researchers, international and national civil servants, legislators and politicians, legal practitioners and judges are welcome to participate in the discussions. We warmly invite those interested in writing a post to send us an e-mail explaining briefly the relevance of the topic and your background as an expert. We will get back to you as quickly as we can. All contributors post in their individual capacity, and their opinions do not necessarily reflect the official position of Human Rights in Context, or any organisation with which the author is affiliated.