Reporting AI Errors? How News Media Sensationalizes AI Fallacy in Human Profiling

by Rahi Patra

AI systems and algorithmic logics are never 100% accurate. Even at 97% accuracy there is always a 3% margin of error, and these errors, when they occur in the algorithmic profiling of humans, can have dire effects on individual lives and human rights. The central question is: how do we make sense of these errors in our increasingly AI-driven societies? How is the discourse shaped? What solutions are we envisaging?
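To make that margin concrete, here is a minimal sketch of the arithmetic; the accuracy figure mirrors the 97% example above, while the population sizes are purely hypothetical and not drawn from our study.

```python
# Hypothetical illustration: how a small error rate scales once profiling
# is applied to large numbers of people. The 97% accuracy figure echoes the
# example above; the population sizes are invented for illustration only.

accuracy = 0.97
error_rate = 1 - accuracy  # the 3% "margin of error"

for people_profiled in (10_000, 1_000_000, 60_000_000):
    misclassified = people_profiled * error_rate
    print(f"{people_profiled:>12,} people profiled -> ~{misclassified:,.0f} potentially misprofiled")
```

In other words, a seemingly small error rate translates into tens or hundreds of thousands of affected individuals once profiling operates at the scale of a national population.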

One useful place to start addressing these questions is, of course, the media. Various scholars have shed light on the role news media play in shaping public understanding of AI technologies. Sun et al. (2020), for instance, have shown that journalists often act as ‘translators of knowledge’ when it comes to AI technologies. News media have also been influential in shaping public discourse on the ethical implications of these technologies (Ouchchy et al., 2020), and this is particularly true if we consider political debates about the implementation of facial recognition technologies and their democratic implications (Shaikh & Moran, 2022).

Interesting insights emerge from this body of literature. Shaikh and Moran (2022), for instance, have shown that US media coverage of the risks and opportunities of AI technologies was influenced by the clear ideological positions held by left-wing and right-wing outlets. Furthermore, a study conducted by the Reuters Institute for the Study of Journalism (RISJ) in 2018 revealed that the debate surrounding these issues is frequently dominated by industry and corporate voices (Brennen et al., 2018).

Influenced by similar understandings of the importance of the media in shaping public perceptions of AI, over the past few years we conducted a longitudinal, cross-cultural discourse analysis of how news media in Germany, France, and the UK covered the issue of AI errors in algorithmic profiling between 2020 and 2022. Our approach was largely based on qualitative, critical discourse analysis methodology (van Dijk, 2015), aimed at highlighting the multiple, contradictory, context-specific and complex narratives that appeared within media discourse.

It was influenced by the belief that media power is a subject of great complexity and controversy (Curran & Seaton, 1997, p. 270), and that the media, rather than imposing top-down forms of meaning construction, often become the space where meanings in society are transmitted, negotiated, and contested (Hall, 1997). Our findings have been published in a research report and discussed in a conference paper, titled “Reporting on AI Errors – The Sensationalization of Human/Machine Boundaries in European news media coverage of Algorithmic Profiling”, which we presented at the International Association for Media and Communication Research (IAMCR) conference in Lyon in July 2023; they have now been developed into a full journal article that we have just submitted for review (fingers crossed!).

Our paper shows that news media coverage of AI is becoming fertile ground for questioning the capacity of algorithmic technologies to truly comprehend the intricate nuances and vast diversity of human experiences. However, our research across this period and these three countries also reveals a tendency in news media coverage defined by a sensationalist fascination with the power of AI and by narratives of awe and fear. In this framework, AI errors were often discussed in a game of mirrors with human error, and through a constant destruction and reconstruction of the boundary between humans and machines. These forms of coverage overrode more in-depth and critical discourses on the relationship between AI fallacy and structural injustices, and on the human rights impacts of AI technologies for our societies and our democratic futures.

While the discourse around bias and errors with regard to structural inequalities, human rights and our democratic futures remained limited in the news media, the question of algorithmic (in)capability in grasping the ‘pluriverse’ of human experience stayed at the core of journalists’ interpretations. This narrative was particularly fascinating, as the news media became a space for exploring the difference between humans and machines and the importance of human oversight and of keeping a human in the loop.

The paper was also a great opportunity for us to theoretically conceptualize how we approach ‘errors’ when we talk about the fallacy of AI in reading humans. One interesting aspect of the emerging body of literature on AI failure is that it lacks a theory of errors. This is evident not only in the AI safety literature, where there is little theorisation of failure (Brundage et al., 2018; Scharre, 2016; Amodei et al., 2016; Yampolskiy, 2018), but also in the work of critical scholars such as Noble (2018), Benjamin (2019) and Broussard (2023), who draw on the term ‘glitch’, as understood in the STS literature, to shed light on the biases and structural inequalities of our technologies but do not engage with a theory of errors.

In our work, we combine theories of error in philosophy (and especially Indian philosophy) with anthropological approaches, and discuss the importance of understanding ‘the human error in AI’ as an analytical and methodological tool: one that enables us to appreciate the relationship between AI technologies and processes of knowledge production, and to see how AI systems are pervaded by biased, unaccountable and reductionist understandings of what it means to be human.