Accountable AI?

The computer says no. Why? Because the computer says so. One of the biggest challenges we face today as individuals is that, in the majority of cases, the algorithmic predictions used to profile us and to make data-driven decisions about our lives cannot be explained. In her book Weapons of Math Destruction, O’Neil (2016) shows how the algorithmic models used in fields such as insurance, policing, education and advertising are opaque, unregulated and incontestable, even when they are wrong.

A similar understanding is shared by the computer scientist Dan McQuillan in his piece on ‘algorithmic seeing’. For McQuillan (2016), algorithms are the ‘eye’ of big data: they are what gives meaning to the mass of information. Yet he also argues that algorithmic seeing is oracular rather than ocular (2016: 3). We are asked to have faith in algorithmic predictions, much as some people have faith in oracles, even though these predictions cannot be explained and hence cannot be held accountable. If algorithmic predictions are unexplainable and shaped by multiple obscure variables, how can we guarantee that the decisions they lead to are fair or accurate?

These were some of the questions addressed by a Women Leading in AI Network webinar held in early November 2020. The network, founded in 2018 by Ivana Bartoletti, author of An Artificial Revolution: On Power, Politics and AI (2020), functions as an international lobby group of women advocating for responsible AI.

On this occasion, in the midst of the ongoing Covid-19 pandemic, the network brought together speakers from different national contexts and disciplines. Veronica Barassi, principal investigator of the Human Error Project, spoke alongside Carina Prunkl, Research Fellow at the University of Oxford’s Institute for Ethics in AI and a research affiliate at the Centre for the Governance of AI; the criminologist, community scholar and AI ethics expert Renée Cummings; and Clementina Barbaro, Co-Secretary of the newly established Ad Hoc Committee on Artificial Intelligence of the Council of Europe.

The debate opened with the question of why a more ethical and accountable AI is needed in the first place. As Professor Barassi explained, today we are not only digital citizens, able to use digital environments for political expression, but datafied citizens. The datafied citizen, Professor Barassi argued, “is no longer an agent in the narratives made about him or her”. Instead, it is algorithms that now speak for and about us, making decisions that are inevitably biased and flawed. Thus, for Professor Barassi, AI cannot simply be fixed. Rather, we have to ask: “How do we coexist with it?”

The lack of human agency in increasingly automated, algorithmic decision-making was echoed by other speakers. Dr. Prunkl, for instance, drew attention to the question of “human autonomy” and the different ways in which it is understood.

Renée Cummings related this lack of agency and control over algorithmic decision-making to the political responsibility of big tech, arguing that companies are the main actors defining the future of AI: “In the US, the politicians don’t speak about AI. It’s the big tech who are making the politics regarding AI!” In the European context, Clementina Barbaro noted that even the involvement of multiple stakeholders in a Committee on AI has its limits. Most legal frameworks, she claimed, are not tailored to AI, although there is potential for global AI regulation.

What, then, might be possible paths towards a more accountable AI? The speakers’ varied backgrounds yielded four different avenues towards AI accountability. Firstly, Renée Cummings emphasised the importance of community power. What is required, rather than a mere focus on legal frameworks, is awareness-raising work and community activism: “Taking knowledge to the streets to empower communities.” Cummings understands data not just as conversation but as “lived experience” and insisted that “communities have the rights and the power to defend themselves.”

Secondly, beyond community empowerment, the other speakers pointed to institutional actors that might be able to hold AI accountable. Clementina Barbaro, for instance, stressed that regulation must go beyond legal frameworks and human rights to take into consideration “practical instruments like certifications or controls by third parties”.

Thirdly, similarly pointing to the limits of legal instruments like the GDPR, Professor Barassi raised the question of tech companies’ responsibility and of how radical tech actors might contribute to a more ethical AI.

Finally, Dr. Prunkl brought up the issue of technological infrastructures themselves, showing how ethical practice might be enabled not only through institutional mechanisms such as audits, but also built in at the level of software and hardware development and use.

One point on which all the speakers agreed is that these routes are still only the beginning of a conversation around AI accountability. As Professor Barassi highlighted, what we are witnessing is less a debate than an ongoing, longer-term societal conflict arising in the context of surveillance capitalism.