It’s no secret that mis/disinformation, fuelled by Big Tech algorithms that prioritise virality, is pervasive online. Journalists have long been exposing the lies that circulate online and the people behind them. Now some start-ups hope their artificial intelligence (AI) tools can help journalism organisations do that job. This was one of the discussion points in the recent ‘AI Startups and the Fight Against Mis/Disinformation: An Update’ report, published by the German Marshall Fund of the United States. The study includes interviews with some of the organisations trying to work with newsrooms.
The idea is simple: AI can help humans identify mis/disinformation online by fact-checking news and other circulating messages instantaneously, and by authenticating content to improve provenance – our ability to identify the nature, origin, and purpose of information. The hope is that this will bolster consumer trust in news.
Media trust issues
Making progress in each of these three areas is critical, given that the 2022 Edelman Trust Barometer found that 76 percent of us worry about “false information or fake news being used as a weapon,” and nearly half (46 percent) view media as a “divisive force in society”. Notably, trust levels are higher in countries with respected public broadcasters, including Sweden, Denmark, and the UK.
These concerns about trust in media prompted The Journalism Trust Initiative (JTI) and NewsGuard to develop rating systems so audiences and advertisers would know which outlets are reliable. JTI, led by Reporters Without Borders, develops and implements “indicators for trustworthiness in journalism”. NewsGuard, founded, led, and operated by professional journalists, created a news “rating” – Green (“generally trustworthy”) or Red (“generally untrustworthy”) – based on nine criteria to produce a “Nutrition Label” for news sites.
Vett News, supported by the Knight Foundation, uses technology to establish a feedback loop between news consumers and news producers, making it easy to flag and correct typos, factual errors, and bias, or to add missing context. The Factual uses an AI-enabled news platform to identify unbiased news for consumers interested in trustworthy sources.
The AI solution
“Think of AI as a complement to human thinking, not a replacement,” Arjun Moorthy, co-founder and CEO at The Factual, suggested in an interview for the German Marshall Fund report. “While computers can tell if a specific fact is true or false, tying facts together and understanding the news requires tremendous context and history, which is where humans excel. Hence AI can help identify all the facts and make it easier for humans to reach their own conclusions.”
Enock Nyariki, community and impact manager at The Poynter Institute’s International Fact-Checking Network (IFCN), discussed the role of AI in fact-checking in an interview. He explained that manual fact-checking can take anywhere from two hours to many weeks, involving claim identification, research, writing, and editing before publication. Automated fact-checking, by contrast, happens in real time: it relies on technology to compare misleading claims against published fact checks.
“IFCN was launched in 2015 to bring together a growing community of fact-checkers around the world and to promote excellence in the fight against false information,” said Nyariki. In the last seven years, IFCN has grown from about 30 to more than 100 organisations, according to Poynter.
Since 2016, the group has received funding and information from Big Tech to provide fact-checking services. Under Meta’s Third-Party Fact-Checking Program, fact-checkers working with IFCN assess the veracity of claims made on Facebook and Instagram and attach fact-checks to posts with false information, Nyariki explained.
For example, IFCN convened fact-checkers from ten different organisations in the United States during the 2020 presidential election to create a shared database of published fact checks. The database could be queried during high-stakes events, such as presidential debates, to enable fact-checking in real-time.
Still, this kind of automated fact-checking is in its early stages and continues to be shaped by academic institutions, like the Reporters’ Lab at Duke University’s Sanford School of Public Policy, and up-and-coming start-ups, such as the London-based Full Fact. Full Fact deploys three automated tools that detect and categorise claims, cross-reference them against existing fact checks, and aid fact-checkers by surfacing relevant statistical figures. Ultimately, this combination of tools could help make instantaneous fact-checking a reality, as described in a recent Poynter article.
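The cross-referencing step described above – matching an incoming claim against a database of published fact checks – can be illustrated with a toy sketch. The similarity metric (token-set Jaccard), the threshold, and the sample database below are illustrative assumptions, not how Full Fact’s or IFCN’s actual tools work; production systems use far more sophisticated language models.

```python
def tokenize(text: str) -> set[str]:
    """Lowercase, strip punctuation, and split into a set of word tokens."""
    cleaned = "".join(c.lower() if c.isalnum() or c.isspace() else " " for c in text)
    return set(cleaned.split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: size of the token overlap over the token union."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def match_claim(claim: str, fact_checks: list[dict], threshold: float = 0.3):
    """Return the best-matching published fact check, or None if no
    stored claim is similar enough to attach a fact check to the post."""
    claim_tokens = tokenize(claim)
    best, best_score = None, 0.0
    for fc in fact_checks:
        score = jaccard(claim_tokens, tokenize(fc["claim"]))
        if score > best_score:
            best, best_score = fc, score
    return best if best_score >= threshold else None

# Hypothetical database of already-published fact checks
database = [
    {"claim": "Drinking bleach cures the virus", "verdict": "False"},
    {"claim": "Mail-in ballots were counted twice in the election", "verdict": "False"},
]

# An incoming claim detected in a circulating post
incoming = "A viral post says drinking bleach can cure the virus"
result = match_claim(incoming, database)
```

Here the incoming claim shares enough tokens with the first stored claim to be matched to its published verdict, while an unrelated claim would fall below the threshold and be routed to human fact-checkers instead – mirroring the division of labour Moorthy describes.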
But the prevalence and range of misleading multimedia content complicates the problem. Writing in the Columbia Journalism Review in 2018, Nicholas Diakopoulos warned that “…visual evidence may largely lose its teeth as strategic misinformers use the specter of the technology to undermine any true (verification)”. He asked: “So what happens when the public can no longer trust any media they encounter online?”
Content provenance is another attempt to help with verification. Traditional news organisations and large technology companies have collaborated on new authenticity standards through the Content Authenticity Initiative (CAI), a formal coalition of Adobe, Arm, BBC, Intel, Microsoft, and Truepic – a photo and video authentication company whose technology allows businesses, non-profits, non-governmental organisations, and citizen journalists around the world to verify their photos and videos.
Mounir Ibrahim, vice president of public affairs and impact at Truepic, told Columbia University graduate students working on the German Marshall Fund report that COVID-19 has accelerated the need for verification because so much of society is now digitised.
It’s clear that the information ecosystem is permanently marred by the spread of mis/disinformation – unrelenting in the wake of COVID-19, the 2020 United States presidential election, the insurrection at the United States Capitol, and Russia’s invasion of Ukraine. AI start-ups on their own won’t be enough, but they have a role to play in helping newsrooms.
Read the full ‘AI Startups and the Fight Against Mis/Disinformation: An Update’ report here.
Opinions expressed on this website are those of the authors alone and do not necessarily reflect or represent the views, policies or positions of the EJO or the organisations with which they are affiliated.