When machines fail us: discussing the human error of AI

Prof. Stefania Milan during her keynote at the “Machines That Fail Us” conference

Artificial intelligence is becoming an ever greater part of our daily lives, taking over more and more decisions and tasks. Yet the technology remains prone to bias and error. This problem was the focus of discussion among experts at the public conference “Machines That Fail Us”, held at the University of St. Gallen on June 24th to mark the conclusion of The Human Error Project.

The case of Robin Pocornie, a Black student, illustrates how artificial intelligence can reproduce racial bias. During the COVID-19 pandemic, Pocornie attempted to take an online exam at VU Amsterdam, where proctoring software monitored students via their webcams for potential cheating. Because the software had been trained primarily on images of white faces, it failed to detect her, displaying the message “No Face Found” on her screen. She could only satisfy the monitoring system by shining a lamp directly on her face throughout the exam. This incident was one of many examples discussed at the “Machines That Fail Us” conference at SQUARE, which highlighted the dangers and injustices posed by AI technologies.

“Our technologies are designed in such a way that they can also be wrong. We cannot rely on them. One of the exciting questions is how tech developers and tech companies deal with the fallibility of AI and algorithms, and what the use of AI does to our own human judgment,” remarked Prof. Dr. Veronica Barassi, who organized the conference together with her team from the HSG’s Institute for Media and Communications Management (MCM-HSG). In 2020, Barassi and her team launched “The Human Error Project,” a research initiative focused on these questions. At the conference, team members presented publications from the project examining civil society’s efforts to combat algorithmic injustice in Europe and the media discourse on AI errors and human profiling. The discussions also addressed the power imbalance between tech companies and civil society, and how AI often reinforces existing inequalities.

Humans cannot be standardized
At the conference, which was funded by the Swiss National Science Foundation through its Agora funding scheme, the keynote address was delivered by Stefania Milan, professor of Critical Data Studies at the University of Amsterdam. She raised serious concerns about the growing reliance on surveillance software, describing it as a “regulating data infrastructure” that increasingly takes over roles once fulfilled by humans in state functions. Milan pointed out that the pandemic accelerated the deployment of these infrastructures, often without transparency or democratic oversight.

She also discussed several alarming developments: the slow pace of AI regulation, the outsourcing of government tasks to for-profit entities, the difficulty individuals face in opting out of AI applications, and the surge in energy consumption linked to data centers, with some forecasts predicting that data centers could consume one-fifth of global energy by 2027. She further criticized “tech-solutionism” for overlooking societal complexity by designing solutions for a standardized average person, which is problematic because, as she emphasized, “people are not standardized.” As an example, she cited early versions of Germany’s contact-tracing app during the pandemic, which was only compatible with the latest smartphones and thereby excluded many users with older devices.

To tackle these challenges, Milan proposed three key areas for action: establishing a robust regulatory framework, developing ethical guidelines for technology development and use, and raising public awareness of the vulnerabilities of AI systems.

Bringing the resistance together
The subsequent panel discussion explored possible ways out of the problems Milan had described. Entrepreneur Lorna Goulden argued that technology itself can be part of the answer, proposing digital tools that let individuals control their own data and help AI developers build responsible solutions that adhere to principles such as transparency and privacy protection. Luca Zorloni, a journalist with the Italian edition of Wired magazine, emphasized the need for regulation and advocated a statutory requirement to disclose algorithms that significantly affect the public sphere. Ilia Siatitsa of Privacy International highlighted developers’ responsibility to design solutions with the most vulnerable groups in mind first and only later expand them to the general public.

Despite their varied approaches, all three panelists agreed that resistance to the negative impacts of AI is currently fragmented. “It is necessary for all organizations that deal with the topic to come together in order to be able to oppose the big tech organizations more effectively,” Goulden stressed.

You can re-watch the entire conference here:

You can listen to all the “Machines That Fail Us” podcast episodes here:

Blog post originally published by the Communications Office of the University of St. Gallen