Civil Society’s Struggle Against Algorithmic Injustice in Europe

by Antje Scharenberg


Today, we are delighted to share the second research report of the Human Error Project, entitled “Civil Society’s Struggle Against Algorithmic Injustice in Europe”!

The report sheds light on how European civil society organizations negotiate issues of AI error, how they understand key problems, and how they take action against the algorithmic injustices that AI systems perpetuate.

With this report, we contribute to a vital and ongoing academic and public discussion about how AI systems influence the work of civil society actors in the digital age, and how those actors struggle to hold these systems accountable (e.g. Milan, 2015; Bonini and Treré, 2024). We agree with other scholars who have demonstrated that debates around AI have to be understood in the context of wider issues of inequality (e.g. Eubanks, 2018) and social justice (Dencik et al., 2016), and of how these issues intersect with and are perpetuated by algorithmic technologies.

What our research contributes to these important, ongoing discussions is a better understanding of the everyday negotiations of AI error from the perspective of civil society, and of just how complex and difficult it is to establish algorithmic accountability. At the same time, while civil society faces a number of technological, discursive and political challenges, the report also demonstrates that resistance is possible, pointing to successful examples of algorithmic justice.

Overall, the report is divided into two parts, shedding light on:

  1. The European landscape of civil society organizations working on digital rights and issues of algorithmic injustice: who are some of the key civil society actors taking action against algorithmic injustice in Europe, what is their mission, and how do they act?
  2. How these actors negotiate the everyday struggle against AI errors, and what they believe can be done about it: what are the main issues connected to AI errors and algorithmic profiling? How do civil society organizations in Europe understand ‘the human error of AI’?

You can find the full report here. Please do not hesitate to contact us directly if you have any questions.

Research report:
Scharenberg, Antje, Veronica Barassi and Philip Di Salvo (2024), ‘Civil Society’s Struggle Against Algorithmic Injustice in Europe’, Research Report (II), The Human Error Project: AI, Human Rights and the Conflict over Algorithmic Profiling, School of Humanities and Social Sciences and MCM Institute, University of St. Gallen, St. Gallen, Switzerland.