“Machines That Fail Us”, Episode 1: Making sense of the human error of AI
The Human Error Project team is delighted to announce the publication of “Machines That Fail Us”, a podcast series based on the research work of our project. With five episodes released monthly starting today, the podcast will discuss the main results of The Human Error Project’s academic journey. To do so, we will be joined by the voices of some of the most engaged individuals working at the crossroads of artificial intelligence and human rights: with them, we’ll discuss the most recent developments in the debates around AI errors and algorithmic profiling. The “Machines That Fail Us” podcast series is aimed at the lay public: it communicates the academic side of our work to non-specialist audiences with an interest in the interconnections between AI and broader societal issues.
The “Machines That Fail Us” podcast is made possible thanks to a grant provided by the Swiss National Science Foundation (SNSF)’s “Agora” scheme. The podcast is produced by The Human Error Project team in cooperation with the Communication office of the Universität St. Gallen (HSG), and postproduction is curated by Podcastschimiede. Philip Di Salvo, who works as a researcher and lecturer in the HSG’s Institute for Media and Communications Management and has been part of The Human Error Project since 2022, will be the main host of the podcast. Episodes will be released on the HSG website, on The Human Error Project website, and on all major audio and podcasting platforms.
Machines That Fail Us #1 | Making sense of the human error of AI
In the first episode of “Machines That Fail Us”, we make sense of what AI errors are and how they are already impacting the lives of many individuals and groups in our society. AI systems and algorithms are not as infallible and objective as we often describe them: they are prone to biases, discriminatory outcomes, and inaccuracies, especially when they profile humans. Additionally, when AI and algorithms do make mistakes, holding these machines accountable can be very difficult or even impossible, as they are coded and operate in non-transparent ways. All these errors, we argue, are not mere technical glitches, but something more profound and systemic that interconnects with broader, pre-existing societal issues. What are the implications of AI errors for society, and what do they tell us about our future with artificial intelligence? In the first episode of “Machines That Fail Us”, The Human Error Project team introduces the concept of “the human error of AI” and the main results emerging from the research. With Veronica Barassi, Antje Scharenberg, Rahi Patra, Marie Poux-Berthe. Host: Philip Di Salvo.