Misinformation: why it may not necessarily lead to bad behaviour

February 24, 2023 • Ethics and Quality, Recent, Research

Lies are nothing new.
durantelallera/Shutterstock

Magda Osman, Cambridge Judge Business School; Björn Meder, Health and Medical University; Christos Bechlivanidis, UCL, and Zoe Adams, Cambridge Judge Business School

“So far as the influence of the newspaper upon the mind and morals of the people is concerned, there can be no rational doubt that the telegraph has caused vast injury.” So said The New York Times in 1858, when the transatlantic cable linking North America and Europe was completed.

The telegraph was assumed to be a means of spreading propaganda that would destabilise society. It was also seen as a vehicle for disconnecting people from the real world by planting false ideas in their heads. Today, we might dismiss this as an irrational fear – a moral panic.

Go back further and there are examples of questionable information recorded and disseminated via information technologies available to the ancients – in clay, stone and papyrus. Fast forward to today, and the exact same concern exists around social media. So are we overreacting? We have interrogated the evidence suggesting that misinformation leads to bad beliefs and behaviour and found we might be.

The concern about misinformation is certainly growing. Type “misinformation” into an academic search engine and you get about 100,000 hits for the years 1970 to 2015 – and over 150,000 for the past seven years alone.

Sweden, Australia, Canada, the United Kingdom and the United States, along with the European Union, the World Health Organization and the United Nations, are all conducting intense research on the topic. This work is linked to the introduction of laws, bills, task forces and units designed to block the spread of the misinformation virus. The consensus, it seems, is that misinformation is a problem, and a big one.

What drives this consensus? When we reviewed the research across a number of different disciplines – including sociology, psychology, computer science, philosophy and media studies – we found the finger pointing at the evolution of the internet. The advent of social media has turned passive consumers of information into active producers and distributors. The result is unchecked and uncontrolled information that may boost beliefs in false claims.

This research suggests misinformation may lead to increased distrust in news media and governments, or to illiberal political behaviours such as violent attacks on ethnic groups. Or that it may destabilise economic behaviours. After all, Pepsi’s stock fell by about 4% after a fake story went viral claiming its CEO, Indra Nooyi, had told Trump supporters to “take their business elsewhere”.

Yet, the presumed relationship between social media and such social unrest is frequently based on tacit assumptions, not direct empirical evidence. These assumptions commonly take the form of a causal chain, which goes like this: misinformation → bad beliefs → bad behaviour.

Such a simplistic causal relationship between beliefs and behaviour has been questioned in both philosophy and psychology. In reality, there’s a dynamic relationship between belief and behaviour – each can fuel the other in complex ways.

We often spot untrustworthy sources.
Shutterstock

In principle, people should be capable of assessing the quality of information and its source. After all, we have been dealing with lies and inaccuracies for millennia. And although advertisers can sometimes trick us, there’s no perfect model of how a particular communication channel with particular content can establish beliefs that will spur people to action on a large scale.

Blind spots in research

Just because a lot of researchers agree that there is an infodemic that is causing societal ills – distrust in institutions, for example – doesn’t mean that the issue is settled or that the evidence is secure. By combining a historical and psychological perspective, we discovered blind spots in this reasoning.

The causal chain described requires that we all agree on what misinformation is – and that this doesn’t change over time. But what happens when, over time, what is initially labelled misinformation becomes information, or information becomes misinformation? Galileo’s 1632 challenge to the geocentric astronomical model, which placed the Earth at the centre of the solar system, is a classic example. Despite the fact that he was right, the Catholic Church did not officially pardon him for heresy until 1992. For several centuries, then, Galileo’s truth was treated as misinformation.

A recent case concerns the origin of the SARS-CoV-2 virus: the possibility that it was developed in a lab was initially widely labelled a conspiracy theory, before subsequently being seen as a viable hypothesis.

These difficulties resonate with ongoing debates and disagreement about how to define misinformation and related notions such as fake news and disinformation – the scientific literature contains several competing proposals for definitions and characteristics.

If there is no agreement on a definition of misinformation, it’s no surprise that there is no clear-cut way to determine its role in shaping beliefs and, in turn, how those beliefs affect behaviour.

A second blind spot relates to the accessibility of information. Technological advances have not only given rise to new ways of accessing and sharing information. They also provide new opportunities for journalists, governments and researchers to analyse various forms of human communication at an unprecedented scale.

A common impression is that people on social media are going it alone in curating their own facts about the world, creating a perfect storm in which trust in various institutions (news media, governments, science) erodes and society appears fractured. But just because we can now observe the sheer volume of communication between people online doesn’t mean that this communication directly causes societal ills. We may simply be seeing part of the fabric of human communication that has always existed – in market squares, pubs and at family dinners.

There is still a case to be made for addressing misinformation. But it isn’t clear how regulatory measures designed to impede the spread of, say, misleading scientific claims would work. Regulatory measures are necessary to limit unethical research and practices, but taken to extremes they can erode the foundations of democratic societies.

History shows us the problems with censoring ideas: it often backfires, leading in turn to even less trust in institutions. While there is no easy solution, the goal must be to adequately balance freedom of expression and democratic values against interventions designed to manage the fallout from misinformation.

Magda Osman, Principal Research Associate in Basic and Applied Decision Making, Cambridge Judge Business School; Björn Meder, Professor of Psychology, Health and Medical University; Christos Bechlivanidis, Associate Professor – Experimental Psychology, UCL, and Zoe Adams, Research Associate, Cambridge Judge Business School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Opinions expressed on this website are those of the authors alone and do not necessarily reflect or represent the views, policies or positions of the EJO or the organisations with which they are affiliated.
