Automated Insights, an American technology company, recently announced that it is producing and publishing 3,000 earnings-report articles per quarter for the Associated Press, all generated automatically from data. Narrative Science, another US software company, generates stories in the same way in finance, sports, web analytics and other domains.
Anywhere there is clean, well-structured data, an algorithm can now write straight news stories that are in some cases indistinguishable from human-written ones. Often referred to as “robot journalism”, such technology offers new opportunities for cheaply creating content on a massive scale, personalizing that content for individuals, or simply covering events more quickly than a human ever could.
There are obvious economic benefits to robot journalism, but beyond churning out straight news articles in finance or sports, could these systems one day serve higher-order public-interest journalism? For instance, could robot journalists bring, or enhance, a critical mass of attention and public pressure on important civic issues? How are such technologies going to change the public media sphere that we inhabit?
Together with Tanya Lokot at the University of Maryland, I have been examining the roles and functions of robot journalists, in particular on social media.
There is often a focus on the negative uses of bots on social media, of which there are many: social shaping, content pollution, social-metric gaming, political astroturfing and others. For example, there have been media reports of such bots manipulating trending topics in the Mexican elections, or drowning out dissent in Russia.
Can “news bots” do public interest journalism?
But we are more interested in the ways that such news bots act as positive purveyors of news and information on a platform like Twitter. Can they do public interest journalism?
In our studies we have examined hundreds of news bots that we’ve found on Twitter. There is no doubt a healthy ecosystem of automata sharing news and information, aggregating bits of this or that, and creating niche channels that may only appeal to a very narrow audience. A bot like @BadBluePrep, for instance, aggregates news related only to survival and preparedness. The @North_GA news bot pulls from and aggregates news feeds oriented towards the northern part of the U.S. state of Georgia. The possibilities for serving micro-audiences of niche interest or hyperlocality at low cost are readily apparent.
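The aggregation pattern these niche bots follow can be sketched in a few lines: take items from a feed, filter them against a narrow vocabulary, and format what survives as a tweet. The keywords, sample items, and tweet format below are illustrative assumptions on my part, not @North_GA’s actual implementation.

```python
# A minimal sketch of a hyperlocal aggregator bot in the spirit of
# @North_GA. Keywords and sample feed items are invented for illustration.

NICHE_KEYWORDS = ("north georgia", "gainesville", "dahlonega")

def matches_niche(headline, keywords=NICHE_KEYWORDS):
    """True if the headline mentions any niche keyword (case-insensitive)."""
    text = headline.lower()
    return any(kw in text for kw in keywords)

def to_tweet(headline, url, limit=280):
    """Compose a tweet, truncating the headline to fit the length limit."""
    room = limit - len(url) - 1          # one space before the link
    if len(headline) > room:
        headline = headline[:room - 1] + "\u2026"  # trailing ellipsis
    return f"{headline} {url}"

if __name__ == "__main__":
    feed = [
        ("County fair returns to Gainesville this weekend", "https://example.com/1"),
        ("National markets close mixed", "https://example.com/2"),
    ]
    for headline, url in feed:
        if matches_niche(headline):
            print(to_tweet(headline, url))
```

A real bot would read from live RSS feeds and post via the Twitter API, but the filter-and-format core is this simple.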
We have also found some bots that are pushing into higher-order journalistic functions like accountability or critical commentary. For example, the @cybercyber bot critiques the overuse of the term “cyber” in news stories — drawing attention to that peculiar word choice.
The @NYTAnon bot detects and disseminates instances in New York Times articles where anonymous sources have been used. The NYTAnon bot is particularly interesting because it stirred up some debate among journalists on Twitter, and even entered the New York Times’ own discourse on the Public Editor’s blog as the paper grapples with the issue of anonymous sources. I wouldn’t yet call it “accountability”, but this bot does seem to contribute in a meaningful way to media criticism, and to provoke responses.
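The detection step behind a bot like @NYTAnon can be approximated with simple pattern matching over article text. The trigger phrases below are my own guesses at the kind of sourcing language such a bot might look for; the real bot’s rules are not documented here.

```python
import re

# Phrases that commonly signal anonymous sourcing in news copy.
# These patterns are illustrative assumptions, not @NYTAnon's real rule set.
ANON_PATTERNS = [
    r"spoke on (the )?condition of anonymity",
    r"who was not authorized to (speak|discuss)",
    r"according to (a|an|two|three|several) (person|people|official|officials)",
]
ANON_RE = re.compile("|".join(f"({p})" for p in ANON_PATTERNS), re.IGNORECASE)

def flag_anonymous_sourcing(paragraph):
    """Return the first matched sourcing phrase, or None if nothing matched."""
    match = ANON_RE.search(paragraph)
    return match.group(0) if match else None

if __name__ == "__main__":
    sample = ("The decision was described by two officials "
              "who spoke on condition of anonymity.")
    print(flag_anonymous_sourcing(sample))
```

A production bot would run this over each new article and tweet the flagged passages with a link back to the story.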
Most bots do not reveal their information sources
Whether bots can do accountability journalism across a wide variety of domains remains to be seen, though. Our analysis also showed that only about 45% of the bots we examined disclosed the information sources they drew on. The transparency of these bots thus emerges as a potentially important issue for their broader adoption.
Nearly 10% of Twitter accounts are automated
How should the ideal of journalistic transparency manifest for such bots, and how should news consumers adapt their understanding of credibility and trustworthiness with respect to automation? Moreover, given that we can only expect more bots to be employed for critique and commentary, the issue of platform ownership comes into play. Twitter estimates that about 8.5% of the accounts on the platform are automated in some way, and it routinely scrubs out flagrant spam bots.
But the same terms of use that Twitter uses to expunge those spam accounts might also be brought to bear on a more noble news bot. What will happen when a corporate entity decides to shut down a news bot that it disagrees with?
pic credit: Alberto D’Ottavi / Flickr CC
Tags: Computational journalism, Digital Media, digital news, Journalism research, journalists, media, Media Accountability, Media economics, Media ethics, Media research, New York Times, news bots, Online journalism, Robot journalism, Social media, social networks, Twitter