We are delighted to release today the first report of The Human Error Project!
The report, titled “AI Errors and the Profiling of Humans: Mapping the Debate in European News Media”, sheds light on how debates about algorithmic profiling and AI errors are being shaped in European news media, and on how controversial data practices, technologies and policies are framed and mediatized.
News media are a key field of interest when we want to ‘map the debate’ on the arrival of new technologies and their implications. As the sociologist Pierre Bourdieu (1991) has shown, news media can be understood as the center of symbolic power in our societies, and they can play a fundamental role in constructing our sense of reality. Research has shown that this is particularly true with reference to emerging technologies (Roessler, 2001; Weaver et al., 2009), including AI (Brennen et al., 2018; Ouchchy et al., 2020; Sun and Zhai, 2022; Shahik and Moran, 2022).
Between February 2020 and February 2022, we therefore carried out a longitudinal, cross-cultural discourse analysis of European news media. For our research we focused on three core countries: Germany, France, and the UK, and analyzed 520 articles across 15 general interest newspapers. We selected these three countries not only because they are the biggest economies in Europe and leaders in the race for AI innovation, but also because our research team covers the respective languages, which allowed us to capture the nuances and cultural specificities of media discourse. The research was also enriched by a further contextual analysis of articles from other countries.
In our analysis we were interested in shedding light on the juxtaposed, contradictory narratives that are shaping the debate around AI errors, algorithmic profiling, and human rights. Overall, the report discusses an array of case studies and instances that offer an overview of how different AI systems misunderstand and mismeasure humans, and of how European journalists cover these failures.
In the report we have divided the articles studied into two parts:
- In the first part, titled “The Errors in the Technologies that Profile Us”, we focus on the coverage of errors, inaccuracies, and biases in facial, speech, and emotion recognition technologies.
- In the second part, titled “The Impact of AI Errors in Society”, we focus on articles that covered how AI errors are affecting different areas of society, namely: a) employment and work, b) crime and policing, c) health, and d) social media censorship.
You can find the full report below; please do not hesitate to contact us directly if you have any questions.
Barassi Veronica, Scharenberg Antje, Poux-Berthe Marie, Patra Rahi and Di Salvo Philip (2022), “AI Errors and the Profiling of Humans: Mapping the Debate in European News Media”, Research Report (I) The Human Error Project: AI, Human Rights and the Conflict over Algorithmic Profiling, University of St. Gallen, St. Gallen, Switzerland.