The Politics of AI Ethics

AI ethics is a much-used buzzword in the discourse around the challenges and opportunities of artificial intelligence. Yet, there is a problem with AI ethics that only becomes visible when taking a closer look at the debate: AI ethics is not just an ethics but a politics.

In recent years, there has been a discursive explosion around the term “AI ethics” in academic scholarship (see, for instance, Wagner, 2018; Jobin, Ienca and Vayena, 2019; Bietti, 2020; Rességuier and Rodrigues, 2020). These discussions emerged as it became increasingly obvious that the ongoing implementation of artificial intelligence technologies – such as facial recognition or algorithmic prediction – in various parts of our everyday lives needed to be considered through the lens of ethics. As Rességuier and Rodrigues put it: “It is now well recognized that things could go really wrong if AI is implemented without due regard and consideration for its potentially harmful impacts on individuals, on specific communities and on society as a whole” (2020, p.1). This seems particularly urgent in a contemporary moment when algorithms and artificial intelligence are used not only in consumer culture, predicting, for instance, what you should watch next on Netflix, but also in the context of military defence, policing and incarceration (Ochigame, 2019).

Broadly speaking, the purpose of AI ethics is “to ensure that AI is deployed in a manner that respects dearly held societal values and norms” (Rességuier and Rodrigues, 2020, p.2). AI ethics often takes the shape of a set of codes, guidelines or principles according to which AI should be employed. Such principles frequently include the notions of “transparency”, “fairness” or “responsibility” (see Jobin, Ienca and Vayena, 2019; Ochigame, 2019). Thus, at first glance, AI ethics seems to be a good idea. Indeed, there is wide-ranging acceptance among stakeholders with an interest in AI – ranging from states to businesses and from academics to civil society organisations – that AI ethics should exist. Yet, there is a fundamental problem with the term “AI ethics” that only becomes visible once we take a closer look at the debate.

In a recent article for The Intercept (2019), Rodrigo Ochigame explains what can go wrong with AI ethics. Ochigame reflects here on his time as a graduate student researcher working with Joichi Ito, who was then the chair of the MIT Media Lab’s ethics group. Based on his experiences in that position, Ochigame argues “that the discourse of ‘ethical AI,’ championed substantially by Ito, was aligned strategically with a Silicon Valley effort seeking to avoid legally enforceable restrictions of controversial technologies” (2019, online). In other words, Ochigame felt that “MIT lent credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation”. The wider problem with AI ethics, then, is that a close alliance of academia and business can end up increasing the power and profit-making possibilities of big tech at the cost of critical inquiry. What Ochigame’s fascinating and worrying story ultimately tells us is that we have to ask not only what AI ethics is, but also critically question who it is made by and what it is made for.

Once we pay closer attention to the different stakeholders of AI ethics, we start to realise that, while the same buzzwords are used, there are vastly different – indeed conflicting – understandings of what AI ethics actually means, as well as of why and how it might be implemented, as Jobin et al. (2019) have shown. Businesses, for instance, might be open to – and indeed a driving force of – AI ethics. But, as Ochigame points out, quoting computer science professor Cynthia Dwork, it is ultimately “economically advantageous to provide a service that is ‘free of regulatory problems’” (2019, online, emphasis added). As Rességuier and Rodrigues (2020) argue, it is not AI ethics as such that is the problem. To say so would mean to engage in what Bietti (2020) has called “ethics bashing”, that is, to trivialise and underestimate ethics as a potentially powerful tool. Rather, problems arise when the term “AI ethics” is misused in a way that prioritises the profits made by businesses over the human rights of the people being profiled. Scholars have termed such practices “whitewashing” (Ochigame, 2019), “ethics washing” (Bietti, 2020) or “ethics-shopping” (Wagner, 2018).

Thus, if we actually want to understand the discourse and meaning of AI ethics, we must regard AI ethics as a conflict of interests. In other words, more than a mere legal guideline or set of principles, AI ethics must be understood as an ongoing negotiation (Rességuier and Rodrigues, 2020) or, put differently, as a politics. As the organisation Algorithm Watch, which compiled a global inventory of AI ethics guidelines, states: “The number and diversity of actors and their different goals make it all the more necessary to clearly define the terms used” (Algorithm Watch, 2020, online), so that AI ethics is not misused as a business weapon “in support of deregulation, self-regulation or hands-off governance” (Bietti, 2020, p.2010). Grassroots campaigns such as #TechWontBuildIt, #NoTechForICE or #Data4BlackLives are already drawing attention to the real-life effects that the reproduction of existing inequalities through AI may have. It is for these reasons that we, in the Human Error Project, are interested in actors who are challenging empty promises and false ideas of AI ethics, in order to find out what a genuinely ethical AI may look like.

Figure 1: A recent report by Algorithm Watch shows that most AI ethics guidelines are developed in the U.S. Source: Algorithm Watch, 2020.


By Antje Scharenberg