How Safe Are The European Elections From The Threat Of Disinformation?

February 13, 2019 • Digital News, Recent, Specialist Journalism

Europe goes to the polls at the end of May. With so much at stake in these elections, many fear a repeat of the massive disinformation campaigns that affected the results of the 2016 Brexit referendum and US presidential election.

In an effort to prevent the democratic process from being subverted yet again, in 2017 the EU Commission launched a campaign to tackle the threat of fake news and disinformation. The Commission’s attention has focused mainly on the Facebook-Google “duopoly”, and at the end of January it released progress reports produced by the tech giants themselves.

At the same time, the Commission said that while these and other online platforms had taken some steps to comply with their commitments, they needed to make greater efforts to stop the European elections from being undermined by disinformation.

The “implementation reports” produced by the online platforms (Facebook, Google, Mozilla and Twitter) focus on the measures they have taken to fulfil their commitments under the Commission’s Code of Practice on Disinformation. The big two, Google and Facebook, adopted very similar approaches in their reports. Their findings in the main areas of concern (advertisements, fake accounts and managing algorithms) can be summed up as follows:

Advertisements

Facebook has pledged to implement greater transparency and to make an archive of political advertisements available to the public. In addition to traditional party advertisements, the archive will also include “issue ads”, advertisements that deal with controversial political topics. This feature has already been launched in the United States, where the “issue ads” cover topics such as abortion and immigration. In its implementation report, Facebook says that it plans to launch the transparency and ad archive features in Europe ahead of the EU elections, but it has not yet specified a launch date.

Google also aims to provide a searchable archive of advertisements and to ensure a higher degree of transparency over political advertising. Like Facebook, Google has not yet launched this feature in Europe but intends to do so before the European elections.

Fake Accounts

Facebook pledges to remove anything it classifies as “inauthentic behaviour” (fake accounts, previously blocked accounts that have been re-registered, etc.) from the network. Facebook estimates that 3-4% of all its accounts are fake. With a total of 2.2 billion active users, that works out at somewhere between 66 and 88 million fake accounts. The issue is by no means new: Facebook accounts that cannot be traced back to an identifiable individual were a problem long before 2016.
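As a quick back-of-the-envelope check, that range can be reproduced in a few lines of Python (purely illustrative, using only the figures cited above):

# Back-of-the-envelope estimate of Facebook's fake-account count,
# based on the figures cited in this article (illustrative only).
active_users = 2_200_000_000                   # ~2.2 billion active accounts
fake_share_low, fake_share_high = 0.03, 0.04   # Facebook's 3-4% estimate

low = active_users * fake_share_low            # 66,000,000
high = active_users * fake_share_high          # 88,000,000
print(f"Estimated fake accounts: {low:,.0f} to {high:,.0f}")
# -> Estimated fake accounts: 66,000,000 to 88,000,000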

Google is also taking action against fake accounts and attacks on its services – such as the automated creation of accounts, fraudulent registration for Google News or black-hat SEO – in an effort to prevent its search results from being manipulated. It is in the platform’s interest to be seen to be making this kind of effort, as it helps to create an environment in which users feel comfortable. According to Tarleton Gillespie, this type of content moderation should be regarded as one of the basic services provided by digital platforms.

Managing Algorithms

Facebook says that it prioritises trustworthy sources in its newsfeed, giving lower precedence to misleading content and showing it to users less often. But this raises the question of why posts the platform itself has classified as misleading are shown at all.

Google is planning to display more fact-checks in its search results and is working on the development of “credibility signals”, which should ensure that the quality of content is more clearly labelled. These efforts on the part of Google and Facebook are not new. However, a recent statement from the Google subsidiary YouTube, which has acquired a reputation as a haven for conspiracy theorists of all kinds, shows that it too is taking steps to address the issue. The YouTube blog post states that, in future, “borderline” and misleading content (including conspiracy theories about the 9/11 attacks, “flat earth” claims and videos promoting phoney miracle cures for serious illnesses) will be recommended less frequently. YouTube does not say exactly how it plans to identify such content, though it does say that “human evaluators” will help to train the machine-learning systems that generate recommendations. In any case, YouTube claims that only around one percent of its videos will be affected by the shift in policy, and time will tell how much impact these changes to its algorithms will have.

Room For Improvement

So, the tech giants are making efforts to combat disinformation – partly in response to political pressure, but also because they expect some economic benefits. But is this enough?

Mariya Gabriel, the EU Commissioner in charge of digital economy and society, feels that still more could be done.

At a conference devoted to “Countering Online Disinformation” held by the EU Commission in Brussels on 29 January, Gabriel pointed to a number of weaknesses in the big tech platforms’ response. She criticised the fact that the measures will not apply equally across all EU member states. She also complained that the pace of change was too slow and that efforts to improve the transparency of advertising on the platforms did not go far enough: independent researchers, for example, still lack adequate access to the platforms’ data. Her criticism raises important questions about who digital platforms actually reach and how users respond to targeted disinformation.

During the conference, representatives of the platforms insisted that they set great store by transparency and that they were striving to give users access to more information (including information about information). However, it is a moot point whether more information will solve the problem or whether it will actually increase the level of confusion. Philip M. Napoli has discussed this question in detail (“What if more speech is no longer the solution?”). He concludes that if they are to fulfil their social responsibilities, platforms should be doing more to promote counter-arguments.

Unchecked Power of the Platforms?

The emphasis on media literacy, transparency and access to information also diverts attention away from other important considerations – for example, the fact that the digital landscape is increasingly dominated by the big platforms. The digital public faces a huge dilemma: the platforms are necessary for social discourse, yet they must also enforce the rules that make successful discourse possible. In doing so, they wield powers and responsibilities comparable to those exercised by a state.

Although these rules – at least in Europe – are drafted in close collaboration with regulatory bodies, experts and civil society, their enforcement depends on information provided by the tech giants themselves. Only occasionally do whistleblowers give us a deeper insight into the way some tech companies operate (as Christopher Wylie did in the Cambridge Analytica scandal). So far, there is no system of institutionalised, external, long-term monitoring. The coming months and years will show whether the access granted to independent experts is enough.

The problem is that there is not much time left. The elections are due to take place in just over three months. By the end of May, we will have some idea of the success of this attempt to set up a digital regulation regime in Europe.

This article was first published on the German EJO website.

References

Gillespie, Tarleton (2018): Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

Napoli, Philip M. (2018): What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. In: Federal Communications Law Journal 70 (1), pp. 55–104.

Image Source: pixabay.com

If you liked this story, you may also be interested in How Tabloids Were Able To Frame The Debate Over Brexit

Sign up for the EJO’s regular monthly newsletter or follow us on Facebook and Twitter.
