The Human Error Project

Aim, Questions and Visions

We are living in a historical time when every little detail of our experience is turned into a data point that AI systems use to profile us and to make automated decisions about our lives. These technologies are being used increasingly worldwide.

Health and education practitioners use them to ‘track risk factors’ or to find ‘personalized solutions’. Employers, banks, and insurers use them to judge clients or potential candidates. Even governments, the police and immigration officials use these technologies to make decisions about individual lives, from one’s right to asylum to one’s likelihood of committing a crime. The COVID-19 pandemic has only intensified these practices of technological surveillance, algorithmic profiling and automated decision-making.

In different sections of society, algorithmic profiling is often understood as holding the key to human nature and behavior; it is used to make decision-making more efficient and to ‘avoid human error’. Paradoxically, however – as recent research has shown – these technologies are riddled with systemic ‘errors’, ‘biases’ and ‘inaccuracies’ when it comes to human profiling.

Of course, AI systems can bring many positive outcomes; this is clear if we consider their use in tackling specific issues such as disease or climate change. Yet when it comes to human profiling, these technologies cannot grasp the complexity of human experience and behavior, and their errors can have a real impact on individual lives and human rights.

In 2020, we launched The Human Error Project: AI, Human Rights, and the Conflict Over Algorithmic Profiling because we believed that – in a world where algorithmic profiling of humans is so widespread – critical attention needs to be paid to how institutions, businesses, and individuals coexist with, negotiate and construct meaning out of AI errors.

In our research we use the term ‘the human error of AI’ as an umbrella concept to shed light on different aspects of algorithmic fallacy when it comes to human profiling:

Bias – AI systems are human-made and will always be shaped by the cultural values and beliefs of the humans and societies that created them.

Inaccuracy – AI systems process data. Yet the data processed by algorithms is often the product of everyday human practices, which are messy, contradictory and taken out of context; hence algorithmic predictions are filled with inaccuracies, partial truths and misrepresentations.

Unaccountability – AI systems lead to specific predictions that are often unexplainable and unaccountable. How can we trust or challenge their decisions if we cannot explain or verify them?

The combination of bias, inaccuracy and lack of transparency in algorithmic predictions, we believe, implies that AI systems are often (if not always) somehow fallacious in reading humans.

The Human Error Project thus shares many of the understandings of current research in the field of critical AI and data studies, which has shown how AI systems are often shaped by systemic inequalities (Eubanks, 2018; Amoore, 2020; Crawford, 2021), by racial biases (Noble, 2018; Benjamin, 2019; Richardson et al., 2019; Atanasoski and Vora, 2019; Amaro, 2021) and by inaccurate and reductionist analyses of human practices and intentions (Barassi, 2020; Milan, 2020).

Yet we also want to push the debate further and ask: what next? We want to question what happens when different actors in society realize that AI systems can be fallacious and biased in reading humans; when they discover that AI systems, too, can be racist, sexist, ageist, ableist and so on. How are different sections of society understanding and shaping the political debate on the human error of AI? How are they negotiating and coexisting with the human rights implications of AI? What solutions and AI futures are different actors envisaging?

We launched The Human Error Project because we believe that one of the most fundamental tasks of our times is to map, study and analyze the emerging debates and conflicts over AI errors and algorithmic profiling. With this project we position ourselves amongst those scholars who have called for an analysis of the ‘political life of technological errors’ (Aradau and Blanke, 2021) and for a qualitative approach to understanding algorithmic failures (Munk et al., 2022; Rettberg, 2022).

Our aim is to map the discourses and listen to the human stories of different sections of society in order to understand how AI errors – when it comes to the profiling of humans – are experienced, understood and negotiated. To achieve this, The Human Error Project team is researching three areas of society in Europe where these conflicts over algorithmic profiling are being played out: the media and journalists; civil society organizations; and critical tech entrepreneurs. Across these sections of society we are gathering data through three main methodologies: critical discourse analysis, organizational mapping, and the collection of 100 in-depth interviews.

Our methodological approach, as we clarify below, is based on the understanding that while most current research and influential journalism in the field of critical AI studies comes from the U.S. and focuses on algorithmic injustice mostly within U.S.-centric systems of inequality, European countries (within and outside the E.U.), with their cultural specificities, are an equally interesting field for studying how the debate on AI errors, algorithmic profiling and human rights is being shaped.

Our team will be working on several interconnected research projects:

Prof. Veronica Barassi will be leading a two-year qualitative investigation – based on critical discourse analysis and in-depth interviews – into the conflicts over algorithmic profiling in Europe, funded by the HSG Basic Research Fund.

Dr. Antje Scharenberg will be working on a postdoctoral research project investigating the challenges of algorithmic profiling for human agency.

Dr. Philip Di Salvo will be working on a postdoctoral research project on how journalists cover AI errors and algorithmic profiling.

Ms. Rahi Patra will be focusing her PhD research on health surveillance technologies, algorithmic bias and their implications for human rights and privacy.

Ms. Marie Poux-Berthe will be working on a three-year PhD research project on the misconstruction of old age in AI and digital media.

We believe that understanding the human errors of algorithms has become a top priority of our times, because these errors shed light on the fact that the race for AI innovation is often shaped by stereotypical and reductionist understandings of human nature, and by newly emerging conflicts over what it means to be human.

If you want to find out more about our research, follow our Research Diary, where we publish our theoretical reflections, findings and project news. Feel free to get in touch if you would like to collaborate; you will find our contact details here.