From ChatGPT to crime: how journalists are shaping the debate around AI errors

April 25, 2023 • Digital News, Recent, Specialist Journalism, Technology

The Human Error Project examines how European news outlets cover AI errors and algorithmic profiling. Shutterstock image.

By Veronica Barassi, Antje Scharenberg, Marie Poux-Berthe, Rahi Patra and Philip Di Salvo

European news outlets have slowly started to shed light on the potentially harmful errors of artificial intelligence (AI). But the narratives around these technologies are still dominated by a sensationalist attitude towards the power of AI, which obscures other, more critical discourses. This attitude emerges clearly from journalism’s need to negotiate, with a mix of awe and fear, between human and machine intelligence, and between human and machine errors. These are the main conclusions of the first report from the Human Error Project, an initiative of the School of Humanities and Social Sciences at the University of St. Gallen, Switzerland.

The study, “AI Errors and the Profiling of Humans: Mapping the Debate in European News Media”, analysed more than 500 articles published in France, Germany and the United Kingdom between February 2020 and February 2022. It documents the juxtaposed and often contradictory journalistic narratives shaping the debate around AI errors, algorithmic profiling and human rights in Europe. The report was published at a time when artificial intelligence, thanks to the viral success of ChatGPT and Midjourney, among others, is making headlines worldwide and causing a “storm of hype and fright”. The study confirmed an emerging focus on the potentially harmful outcomes of AI and algorithmic profiling, which has allowed these technologies to be examined and held accountable – especially in relation to their errors, malfunctions and problematic outcomes.

A “magical” power

Our report also looks at how this scrutiny of AI is identifying longer-term concerns and questions. One particular issue raised is whether “algorithmic technologies can really grasp the complexity and diversity of human experience”. This and other concerns about AI errors are being covered by a growing number of journalistic investigations, such as a recent one by Lighthouse Reports focusing on biases in the algorithms used to make social welfare decisions across Europe.

We based our analysis on news coverage of specific technologies and the impact of AI and algorithmic applications on crucial areas of our societies. First, we looked at how errors, inaccuracies, and biases in facial, speech and emotion recognition and profiling technologies are covered. We also examined news reports on the impact of AI errors on employment and work, crime and policing, health, and social media censorship.

In articles on profiling technologies, we found a tendency to follow the narrative that technology is a powerful entity that shapes social change and determines our future. These stories therefore focus on the differences between “the human” and “the machine”, and on the machine’s ability to go beyond human limits and “fix” societal problems. There was, however, little or no discussion of potential errors. Furthermore, even in articles with undertones of fear, anxiety or concern about AI’s ability to manipulate humans and strip them of their agency and autonomy, a fascination with its almost “magical” powers prevailed.

Disappointing technologies

Our report argues that, while these themes are not surprising as they reflect some of the dominant cultural narratives on AI, they overshadow more critical considerations about AI biases, inaccuracies, or even failures. When these “errors” do end up being covered, this frequently happens through a frame that sees AI systems and algorithms as “disappointing technologies failing to meet expectations”, especially when it comes to their “unfixable” inability to read humans. 

The reasons these articles give for such errors differ. But journalists, especially in the UK, often blame them on broader, non-technological structural issues, such as systemic racism or normative understandings of people’s attitudes and behaviours.

The journalists in our sample covered several areas in which AI errors affect society. One is the professional life cycle – from recruitment to employment and redundancy – every stage of which is already influenced by AI. Within news media discussions of AI at work, two frames were particularly interesting: algorithmic bias in AI-driven recruitment versus human bias; and algorithmic misjudgment, surveillance and workers’ rights.

A second common area of attention is the use of AI systems and algorithms in crime prediction and policing. Here, journalists have based their reporting on prominent case studies that have influenced coverage across countries, such as the adoption of facial recognition or predictive policing tools by law enforcement.

Alarming surveillance and manipulation

Journalists were alarmed about the more widespread surveillance that comes with these technologies, the potential for governments and intelligence agencies to abuse them, and their tendency to reproduce racial biases. In fact, our analysis shows that “perhaps more than in other fields, the issue of AI error in crime prevention seemed to be taken more seriously” by European media, which have frequently called for human oversight.

In coverage of AI in health, journalists focused on errors related to misreading and mismeasuring specific health conditions. Here, too, the discourse on AI errors was shaped by the understanding that most of these errors were caused by a lack of the “human factor”, or a lack of human intelligence, with only a few articles addressing the actual malfunctioning of the technology.

Social media censorship of nudity was also an area of extensive coverage in the sample we analysed. We found news articles focused on artistic representations of bodies that were wrongly censored online through algorithmic decision-making. One of the most prominent themes in this coverage was the portrayal of algorithmic decision-making as a menace to artistic freedom of expression. At times, journalists also focused less on what was being censored and more on what algorithmic logic was promoting, including patriarchal and sexist attitudes towards women.

Overall, our research highlights the emergence of a debate in media spaces around AI, its errors and their societal implications. This “history of the present” approach, which aims to identify the historical forces behind present-day practices, enables us to understand how AI errors became a problem, how they catalyse discussions and debates, and how they incite new reactions. Our research also shows that the media, where humans currently negotiate their relationship with these systems and discuss and question them, is also a space of mediation. In AI’s case, this is not limited to understanding certain technological implications. It also involves a wider and more profound discussion about what being human means and will mean in the future.

The report “AI Errors and the Profiling of Humans: Mapping the Debate in European News Media” is freely available on the Human Error Project website.

Opinions expressed on this website are those of the authors alone and do not necessarily reflect or represent the views, policies or positions of the EJO or the organisations with which they are affiliated.

