Dos and don’ts for media covering non-peer-reviewed research

October 29, 2020 • Recent, Specialist Journalism

On September 30, the New York Times covered a new report by the Cornell Alliance for Science on COVID-19 misinformation. The study suggested President Trump was the “single largest driver” of coronavirus misinformation and “the largest driver of the ‘infodemic’”. The article was widely shared on social media and presented as evidence for a suspicion long held by journalists, pundits, academics, and parts of the public.

As communication scholars who have written extensively on both journalism and COVID-19 misinformation, we have serious questions and concerns about the research methods employed and the claims made in the report – but, more importantly, about how these were covered by the New York Times.

The main problem is that the methods supported neither the claims made in the report nor those described in the New York Times article. While the assertion that President Trump is the “single largest driver” of coronavirus misinformation intuitively feels true, the scientific burden of proof for such a statement is high – and the study in question did not meet it.

As researchers rush to make sense of the upcoming American presidential election, we will no doubt see many more such reports, published without being reviewed by others in the same field (known in academic circles as ‘non-peer-reviewed’ research).

We don’t generally object to this practice as long as certain standards are met. But social science coverage can and should be better. The New York Times’ coverage of the Cornell Alliance for Science report offers a good opportunity to consider how news outlets should be covering non-peer-reviewed social science – and science more broadly.

Expect the unexpected

News outlets have an important responsibility to take care when covering non-peer-reviewed research. This is particularly true in cases such as the New York Times article, where a single report or study is presented as ground-breaking, or sweeping conclusions are drawn from a single source rather than from a larger body of work.

In covering the Cornell Alliance for Science report, the New York Times seemingly made little effort to qualify, assess, compare, or verify its findings. The report claims to offer empirical evidence for something that seems true. Perhaps that is part of the problem.

The reality is that data often surprises us. This can make research equally exciting and frustrating. It is as important, then, to verify expected results as it is to validate unexpected ones. This is in part why peer review remains important.

Avoiding flawed conclusions

At the end of the day, all science is social. The social processes of collaboration, replication, and validation are fundamental to what makes science work. Peer review, for all its flaws and shortcomings, makes research better. Conversely, there is a greater risk of inaccuracy and flawed conclusions when we expedite the publication of findings for a wide audience at the expense of these processes.

We are not suggesting that journalists should be the ones to review and critique research reports. Many lack the training, space, or time to do so. But, at a minimum, journalists must critically interrogate the studies they cover. If they are not able to do so, they should outsource the task to someone who can. This is an established and effective practice in science journalism and should be applied across the media.

This is not to say that either non-peer-reviewed research or mainstream reporting of that research is necessarily bad. We have seen first-hand how both can go right. Some non-peer-reviewed research involves rigorous analysis, internal review, and a commitment to ensuring the report is meticulous, accurate, and accompanied by a clear indication of its limitations. We have also seen coverage that does a good job of contextualizing and evaluating research findings and claims.

The dos

Given what promises to be a busy season of election- and COVID-related reports, here are a few recommendations for news outlets on covering non-peer-reviewed research.

  • Proceed carefully when making grand cause-and-effect claims based on a single non-peer-reviewed report. The same is true for peer-reviewed work. In both instances, consult outside experts who can assess the work’s quality and ensure it is grounded in broader research.
  • Assess the scope and effectiveness of a study’s methods. This should involve interviewing independent experts who are familiar with the approaches and topics covered and who are able to examine and provide context for findings.
  • Make it clear to readers that the research is not peer-reviewed and properly qualify the results reported.
  • Be careful with your choice of headline and your cause-and-effect claims. The hurdles for establishing causation in the social sciences are very high. Reporting should reflect these standards.
  • Employ journalists with training and/or experience covering the social sciences.
  • Examine the intrinsic news value of the report and ask whether you are covering it only because of the reputation of its source.

Considering the challenges and pitfalls of reporting research, we believe there is a need for a new platform or digital outlet that solicits and collects reviews of white papers and non-peer-reviewed reports. In addition to helping assess, contextualize, and qualify reports that often draw significant media attention, such a platform would help us make sense of research and draw meaningful, more widely applicable conclusions from it.

Image sourced from Maxpixels

Opinions expressed on this website are those of the authors alone and do not necessarily reflect or represent the views, policies or positions of the EJO or the organisations with which they are affiliated.
