The launch of ChatGPT and other generative AI tools has drawn increased attention to AI in science, the media, and society at large. Yet only a few studies have analysed how the public perceives and accepts AI in journalism. To address this gap, the Research Centre for the Public Sphere and Society at the University of Zurich conducted a comprehensive survey in July 2023 to understand audiences’ views on the use of AI in news production.
Key findings from the study, which included 1,254 participants from the German- and French-speaking regions of Switzerland, indicate that the Swiss population has reservations about fully or partially AI-generated journalistic content. Only 29% of respondents expressed a willingness to read content generated entirely by AI, whereas 84% were interested in news content produced without AI.
However, acceptance varied by topic. Routine news such as stock-market updates or weather forecasts had the highest acceptance rate (61%), followed by soft news about “stars and celebrities” (49%). Conversely, acceptance was lower for hard news topics such as culture (28%) and science (26%), with just 16% for both national and international politics.
A Risk to Trust
The study also revealed that large segments of the public (61%) perceive AI as negatively impacting the quality of news, with 67% expressing concerns that AI could exacerbate the spread of misinformation. Those surveyed broadly agreed that publishers should clearly label articles produced with AI assistance. This suggests that failing to do so could erode trust in journalism.
Furthermore, the poll showed a reluctance to pay for AI-generated journalism. Only 9% of respondents were willing to pay for content fully created by AI, while 65% said they would pay for articles produced by human journalists. Most participants, 73%, believed that media companies predominantly employ AI to reduce costs. These findings indicate that increasing AI usage in journalism could negatively affect the public’s willingness to pay for content.
Overall, our study suggests that the Swiss population is sceptical about AI’s use in journalism. This highlights the importance of ongoing research to monitor shifts in public opinion. It also demonstrates the need for transparency when using AI in journalism. In this regard, mere declarations of AI usage, such as labelling content as “assisted” or “fully” created with AI, are insufficient. We instead propose the “explanatory” transparency approach John Carroll outlines in Why should humans trust AI?: detailing how and at what stages AI was used in the journalistic process, for what purpose, and how the results were verified.
To facilitate this transparency, we suggest creating brief methodological factsheets that account for AI’s contribution to journalistic content. This approach allows journalism to differentiate itself from unscrupulous outlets that misuse generative AI tools such as ChatGPT for content plagiarism.
Focusing on transparency is essential to counter the prevailing belief that AI harms journalistic quality when, in fact, as Bibo Lin and Seth Lewis suggest, it can serve as a quality-assurance tool when used correctly. Automated data analysis through AI, for example, opens up new possibilities.
In the study, we advocate for press councils and industry associations to develop clear guidelines for AI’s use in journalism. While AI policies exist in some media companies in Switzerland and in other countries, there’s a pressing need for industry-wide standards.
Read the full report here.
Opinions expressed on this website are those of the authors alone and do not necessarily reflect or represent the views, policies or positions of the EJO or the organisations with which they are affiliated.