A lot is being said about what AI is and isn’t. Is it applied statistics on steroids? Is it a new, soon-to-be digital overlord capable of destroying humankind? And consequently, how should we, humans, relate to the technology? These questions all point in one direction: the social shaping of AI, and the role we want to attribute to it in our society in the future, is currently being negotiated piece by piece by various sections of our society.
On one side, tech entrepreneurs and policy makers have shown a tendency to look at AI as something alien to our reality: a superpower, a form of intelligence of its own, something acting from outside our planet, capable of profoundly changing it, or even destroying it. These narratives are deeply problematic, as they end up obfuscating serious and urgent discussions about the real and already tangible impacts of AI systems, hiding them under a veil of longtermist claims and existential, but entirely speculative, concerns. Additionally, these claims are often rhetorically grounded in the hypothetical creation of an artificial superintelligence, conscious and potentially hostile towards humans, whose existence is still only part of sci-fi narratives, and most probably always will be.
On the other side, researchers and civil society organizations have been pushing for more nuanced, critical, and alternative narratives of AI. They argue that AI systems and algorithms, when used in real-life contexts, show several limitations and problematic outcomes: they can discriminate, they are indeed biased, and they are prone to accelerating and exacerbating at scale already existing inequalities and social issues. A recent investigation by The Markup, a US-based news outlet that specializes in algorithmic accountability, has shown how predictive policing systems, some of the most hyped AI applications, fail spectacularly when it comes to predicting crimes. Other systems, including facial recognition and algorithms used in public administration, have shown similar problems. To sum up: AI is fallible, biased, and anything but neutral, and it always will be, especially when it comes to reading humans and their behaviors.
As our research results show, media outlets often tend to favor sensationalist claims about the powers of AI, obfuscating more realistic reflections on the actual errors that these systems can be responsible for, and thus also setting aside in-depth critical analysis of what AI is capable of. Yet, as the recent example of The Markup also shows, these critical discussions have slowly started to emerge in the overall coverage. This is a sign that AI is progressively becoming subject to the same push for accountability that other forms of power usually encounter. This is a fundamental step forward.
At this stage, journalism is the site where we are constructing the role this technology will have in our society. Starting from a social constructivist perspective, we could argue that AI is currently being shaped in the media, through the struggle of competing narratives that the media is hosting. We are doing this through a constant negotiation of views about what AI should and shouldn’t be or do. What is particularly interesting about journalism is that while it is the site where the public debate on AI is being shaped, it is also a field that is itself being radically transformed by these technologies.
This is why studying the biographical experiences of journalists, and understanding how they conceptualize AI and algorithms, has become more crucial than ever. This year’s excellent “Future of Journalism” conference, organized by the School of Journalism, Media and Culture (JOMEC) at Cardiff University in September 2023, was particularly fascinating in this respect. Papers about AI made up a considerable part of the two-day program, which brought together the state of the art of journalism studies. Attending the conference was energizing and inspiring, and the many excellent presentations conveyed a sense of urgency and readiness around issues such as generative AI and its potential clashes with news values and ethical journalistic stances; disinformation and misinformation; and the automation of news work and editorial policies when it comes to adopting AI tools for producing journalism. These are indeed existential questions for journalism as a business, as a culture, and as a fundamental actor in democracy.
Yet it feels like journalism studies is currently looking at AI primarily from an internal point of view, inquiring how AI systems will change the practice of journalism or how they will shape the journalism to come. While I think these are crucial questions for imagining the future of journalism itself, I also think this perspective should be complemented by a broader analysis, one capable of better reflecting how journalism at large is coping with being the site where the struggle over AI is currently unfolding, as that struggle goes well beyond journalism itself. To do so, an external point of view is also needed: one that can understand how journalists position themselves in that struggle, how they contribute to it, and from which assumptions they start. For this reason, we decided to carry out a new work package of The Human Error Project.
After looking at civil society organizations and tech entrepreneurs, “The Human Error Project” has taken its research trajectory in the direction of journalism as well, focusing on how European reporters make sense of AI errors, how they negotiate meaning out of them, and what their views on these issues at large are. We are doing so by interviewing journalists based in Europe who follow the critical AI beat, which aims to expose, report on, and hold accountable AI systems that fail, as well as AI narratives with weak ties to reality. This beat is still a niche and expert one, even within tech journalism, but it is also the space where the struggle over AI is really taking shape, where “AI errors” serve as the starting point for broader discussions about our future with artificial intelligence.
When it comes to AI, this is a time of big questions for our society. Whether we get to the most useful answers will depend on the outcome of today’s struggle over AI narratives in the media. Preliminary results from our research, which we also presented at the “Future of Journalism” conference in Cardiff, indicate that for journalists covering the critical AI beat, AI errors appear extremely real and significant. In their view, today’s AI errors are potential anticipations of the future societal struggles that journalism will be called on to report. No doubt, those stories won’t be written using ChatGPT.