We are living in a historic time, when every detail of our lived experience is turned into a data point that AI systems and algorithms use to profile us, judge us and make decisions about us.
These technologies are used everywhere. Health and education practitioners use them to ‘track risk factors’ or find ‘personalized solutions’. Employers, banks and insurers use them to judge clients or potential candidates. Even governments, the police and immigration officials use these technologies to decide key issues in individual lives, from one’s right to asylum to one’s likelihood of committing a crime. The COVID-19 pandemic has only intensified these practices of technological surveillance and profiling.
AI systems and predictive analytics are often used to make data-driven decision-making more efficient and to ‘avoid human error’. Yet paradoxically, as recent research has shown, these technologies are defined by intrinsic ‘errors’, ‘biases’ and ‘inaccuracies’ when it comes to reading humans, which can lead to a variety of real-life harms and human rights abuses.
The Human Error Project: AI, Human Nature, and the Conflict over Algorithmic Profiling combines anthropological theory with critical data and AI research. It aims to investigate the fallacy of algorithms when it comes to reading humans by focusing on three different, albeit interconnected, dimensions of human error in algorithms:
Algorithmic Bias – Algorithms and AI systems are human-made and will always be shaped by the cultural values and beliefs of the humans and societies that created them.
Algorithmic Inaccuracy – Algorithms process data. Yet the data they process is often the product of everyday human practices, which are messy, contradictory and taken out of context; hence algorithmic predictions are filled with inaccuracies, partial truths and misrepresentations.
Algorithmic Unaccountability – Algorithms lead to specific predictions that are often unexplainable. The fact that most of the algorithms used for algorithmic profiling are unexplainable makes them unaccountable. How can we trust their decisions if we cannot explain them?
Our team will be working on different, interconnected research projects:
Prof. Veronica Barassi will be leading a 2-year-long qualitative investigation – based on critical discourse analysis and in-depth interviews – into the conflicts over algorithmic profiling in Europe, which is funded by the HSG Basic Research Fund.
Dr. Antje Scharenberg will be working on a postdoctoral research project investigating the challenges of algorithmic profiling for human agency.
Ms. Rahi Patra will be focusing her PhD research on health surveillance technologies, algorithmic bias and their implications for human rights and privacy.
Ms. Marie Poux-Berthe will be working on a three-year PhD research project on the misconstruction of old age in AI and digital media.
We believe that understanding human errors in algorithms has become a top priority of our times, because they shed light on the fact that the race for AI innovation is often shaped by stereotypical and reductionist understandings of human nature, and by newly emerging conflicts over what it means to be human.
If you want to find out more about our research, follow our Research Diary, where we publish our theoretical reflections, findings and news about the project. Please also feel free to get in touch if you want to collaborate; you will find our details here.