Algorithmic Bias Cannot Be Fixed

Debate about algorithmic bias is exploding. In 2018, Amazon scrapped an AI recruiting tool after the system, trained mostly on CVs from men, taught itself a bias against women. In 2019, Science published research showing that a health-care risk-prediction algorithm applied to more than 200 million people in the US was racially biased. In 2020, Google AI ethics researcher Timnit Gebru was fired over a paper that exposed the biases intrinsic to large language models.

What is becoming increasingly evident is that algorithms are designed by human beings, are the product of specific cultural contexts and values, and are therefore inevitably biased.

This finding is, of course, not new. In 1996, Friedman and Nissenbaum identified three types of bias in computer systems: pre-existing bias (the bias of the humans who design computer systems and the bias produced by the cultural context that influences the design); technical bias (engineers often work with limited resources and under technical constraints); and emergent bias (societies change, so technologies designed in one time or cultural context may become biased in another) (Friedman and Nissenbaum, 1996).

Although the understanding of bias in computer systems has a long history, the question of algorithmic bias has come to the fore especially in the last few years. In 2014, the Obama administration launched an inquiry into the impact of Big Data, which revealed that automated systems, however unintentionally, are biased and can thus reproduce existing forms of discrimination (Podesta et al., 2014). By 2016, the issue of algorithmic bias had exploded. The American mathematician Cathy O’Neil published Weapons of Math Destruction, in which she argued that algorithmic models are biased and lead to data-driven decisions that reinforce racism and harm the poor.

In the same year, Barocas and Selbst (2016) published an article calling on the public, researchers, and policy makers to understand the disparate impact of big data on different sections of society. In 2018, two books came out that were both crucial in framing these debates about algorithms, discrimination, and social justice: Noble’s Algorithms of Oppression and Eubanks’ Automating Inequality. Noble’s (2018) book focused on the gendered and racial bias of the Google search algorithm and on what it means to live in a society where ‘biased Google searches’ define our knowledge of the world. Eubanks’ (2018) ethnographic work on poor communities and their exposure to systemic automated inequality leaves the reader daunted by the real-life harms and the inescapability of systematic, automated bias.

What all these debates about algorithmic bias suggest is that AI systems are human-made and will always be shaped by the cultural values and beliefs of the humans and societies that created them. The problem of algorithmic bias has become a key issue not only for researchers working on data but also for the industry. More and more tech businesses and AI developers are trying to find solutions to fight algorithmic bias in their products and technologies. It is for this reason that they are funding research and establishing advisory boards (e.g. AI ethics boards) meant to scrutinize the ethical and political impacts of their technologies. At the heart of these industry strategies and practices lies the understanding that algorithms are biased because they have been fed ‘bad data’, and hence that, in order to rectify algorithmic bias, companies need to train algorithms on ‘unbiased data’.
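To make the underlying mechanism concrete, here is a minimal, hypothetical sketch in Python. The feature names, numbers, and the scikit-learn model are invented for illustration and are not drawn from any of the cases above; the point is simply that a classifier trained on historically skewed decisions absorbs that skew, even when no sensitive attribute is ever given to it, because a correlated proxy feature carries the bias for it.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# reproduces the bias. All names and numbers are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Applicants: years of experience, plus a proxy feature that correlates with
# gender in this toy setup but says nothing about skill.
years_experience = rng.normal(5, 2, n)
proxy = rng.integers(0, 2, n)

# Historical hiring labels encode past bias: equally qualified candidates
# with proxy == 1 were rejected roughly half the time.
qualified = (years_experience + rng.normal(0, 1, n) > 5).astype(int)
hired = np.where((proxy == 1) & (rng.random(n) < 0.5), 0, qualified)

X = np.column_stack([years_experience, proxy])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature comes out strongly negative:
# the model has absorbed the historical bias, not candidate quality.
print(dict(zip(["years_experience", "proxy"], model.coef_[0].round(2))))
```

Dropping the proxy column or re-weighting the labels does not dissolve the problem: someone still has to decide which correlations count as ‘bias’ and which as ‘signal’, which is precisely the framing problem discussed below.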

These strategies and practices are, in the best-case scenario, flawed and, in the worst-case scenario, insincere. This is because there is no such thing as ‘unbiased data’. All processes of data collection require framing and processing. Trying to combat algorithmic bias by believing that we need to train algorithms on ‘good data’ that is fair and ethical is a paradox of sorts, as it clearly shows that companies do not understand what bias actually is and how it operates. Rather than trying to defeat bias, we need to coexist with it. Anthropologists have long grappled with the fact that individuals necessarily interpret real-life phenomena according to their cultural beliefs and embodied experience (Clifford and Marcus, 1986), and that cultural bias necessarily translates into the systems we build, including scientific systems (Latour and Woolgar, 1987). From an anthropological perspective, therefore, there is nothing we can really do to ‘correct’ or combat our bias, because it will always be there. The only thing we can do is acknowledge the existence of bias through self-reflexive practice and admit that the systems, representations, and artefacts we build will never really be ‘objective’ or ‘fair’. This same understanding should be applied to AI systems and automated decision-making.

By Veronica Barassi