Artificial Intelligence (AI) holds real potential for smarter journalism in the newsroom. Yet as newsrooms increasingly experiment with technologies such as machine learning and natural language processing, they also run into practical and ethical challenges. Exploring some of these issues was the motivation behind a recent conference at Columbia University in New York.
When AI fails
Success is built on failed experiments and these are certainly part of the current AI experience. Marc Lavallee, head of the Research and Development team at the New York Times, recalled one recent AI experiment that did not go according to plan.
Speaking on the panel “AI in the Newsroom: Technology and Practical Applications”, Lavallee described how his team trained a computer vision programme to recognise members of Congress at the inauguration of President Donald Trump. “For some reason,” Lavallee said, “[the programme] thought all the old white dudes in the audience looked like (U.S. Senator) Al Franken.” In light of such experiences, he added, “We’re approaching this with a healthy dose of scepticism.”
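To make that failure mode concrete, here is a minimal sketch in Python, with invented names and random data rather than the Times’ actual system: a closed-set matcher that always returns the nearest known face will confidently label a spectator as a legislator, because it has no way to say “this is nobody we know.”

```python
# Hypothetical sketch: closed-set face matching with no "unknown" fallback.
# Every face embedding is assigned to its nearest known legislator, so people
# who are not in the gallery at all still receive a confident-looking label.
import numpy as np

rng = np.random.default_rng(0)

# Pretend gallery: one averaged face embedding per member of Congress.
gallery = {f"member_{i}": rng.normal(size=128) for i in range(535)}
names = list(gallery)
matrix = np.stack([gallery[n] for n in names])

def identify(face_embedding: np.ndarray) -> tuple:
    """Return the nearest gallery identity and its cosine similarity score."""
    sims = matrix @ face_embedding / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(face_embedding)
    )
    best = int(np.argmax(sims))
    return names[best], float(sims[best])

# A spectator who is not in the gallery still gets mapped to *someone*,
# because no threshold exists for answering "none of the above".
spectator = rng.normal(size=128)
print(identify(spectator))  # e.g. ('member_312', 0.21)
```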
Can the reality of AI live up to the hype?
Other panellists regretted that the actual applications of AI-powered technology cannot yet live up to the current hype. Sasha Koren, editorial leader of the Guardian’s Mobile Innovation Lab, noted that she found chatbots an “underwhelming experience.” Despite all their promises “that they will chat with you as if they are human,” she said, all they are “really doing is querying a database.”
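A minimal sketch of the pattern Koren describes, using hypothetical election data and bot logic invented for illustration: the “conversation” amounts to matching a keyword and running a canned database query.

```python
# Hypothetical sketch: a "chatbot" that is really a thin keyword lookup
# over a results table, not a system that understands the question.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (race TEXT, winner TEXT)")
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [("governor", "Candidate A"), ("senate", "Candidate B")],
)

def election_bot(message: str) -> str:
    """Map a free-text question onto one of a few canned database queries."""
    for race in ("governor", "senate"):
        if race in message.lower():
            row = conn.execute(
                "SELECT winner FROM results WHERE race = ?", (race,)
            ).fetchone()
            return f"{row[0]} won the {race} race."
    return "Sorry, I don't have results for that yet."

print(election_bot("Who won the governor's race?"))
# -> "Candidate A won the governor race."
```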
As AI in the newsroom gains more attention, so does the influence of commercial companies trying to sell tailored products to newsrooms. Meredith Whittaker, who leads the Google Open Source Research group and is a co-founder of AI Now, detected a tendency to “naturalize the technology,” so as to make it seem inevitable, when in fact it’s always designed by people. The actual capabilities of these programmes may not always be clear, especially as some developers are unfamiliar with the particular characteristics and standards of journalism.
What’s missing in this conversation, Whittaker said, is the question of whether, and to what extent, claims by commercial companies live up to their promises. That’s of concern because these AI developers are salespeople “who don’t give us access to the algorithm, who legally and for a number of good reasons can’t give us access to the data, who assume that our input data matches whatever the data they used to train these algorithms and who are making claims about the efficacy in a field they may or may not understand…”
Artificial Intelligence and ethics
The ethical questions around AI took centre stage at the panel “Exploring the Ethics of AI Powered Products.” Some of the panellists touched on the ethical challenge at the core of AI applications: developing abstract measurements for real-life problems. “We have a lot of things that we’d like to measure,” said Jerry Talton of Slack.
Talton mentioned the example of Slack trying to build predictive models that help important pieces rise to the top of online conversations between co-workers. But, he added, as predictive models can only offer correlations, the ethical challenge lies in “figuring out that gap between the things that we can actually predict and what we’re using those things as proxies for.” Implicit is the danger that predictive models give a false sense of security about which pieces of information actually matter.
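The sketch below illustrates that proxy gap in the abstract; it is not Slack’s model, and the features and labels are invented. The classifier is trained to predict an observable engagement signal, and the risk lies in reading its score as a measure of importance.

```python
# Hypothetical sketch of the proxy problem: the model predicts an observable
# signal (replies within an hour), which we then *treat* as "importance".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Features per message: [reply_count, reaction_count, mentions_manager]
X = rng.poisson(lam=[3, 5, 0.3], size=(500, 3)).astype(float)

# Training label: "got many replies quickly" -- a proxy, not a ground-truth
# judgement that the message actually mattered to anyone's work.
y = (X[:, 0] + rng.normal(0, 1, 500) > 4).astype(int)

model = LogisticRegression().fit(X, y)

new_message = np.array([[8.0, 1.0, 0.0]])  # a noisy thread with many replies
print(model.predict_proba(new_message)[0, 1])
# High score: the model predicts engagement, which is then read as importance.
```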
This sentiment was echoed by Angela Bassa of iRobot. “Math doesn’t care,” she said, indicating that mathematical models are not biased in any particular way. What makes a difference, however, is how data is gathered. Bassa pointed out the false allure of clean data. “We’d like to imagine that it gets collected in these hermetically sealed, beautiful ways where you have these researchers in hazmat suits going into the field and collecting. That’s not how it works.”
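A small illustration of that point, with made-up numbers: the arithmetic itself is neutral, but if the collection process over-samples one group of readers, the estimate it produces is skewed anyway.

```python
# Hypothetical sketch: an ordinary mean, computed on a skewed sample.
import numpy as np

rng = np.random.default_rng(2)

# True population: readers split evenly between two regions, and
# region B readers click much more often than region A readers.
clicks_a = rng.binomial(1, 0.30, size=100_000)
clicks_b = rng.binomial(1, 0.60, size=100_000)

# "Field" data collection over-samples region A because it is easier to reach.
sample = np.concatenate([clicks_a[:9_000], clicks_b[:1_000]])

true_rate = np.concatenate([clicks_a, clicks_b]).mean()
estimated_rate = sample.mean()  # the same neutral arithmetic, on biased data

print(f"true click rate      ~ {true_rate:.2f}")       # ~0.45
print(f"estimated click rate ~ {estimated_rate:.2f}")  # ~0.33
```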
The limitations of AI
Recognising the limitations of AI was a general theme in this panel discussion. Madeleine Elish, a researcher at Columbia University and Data & Society, emphasised that just because AI technology is automating certain tasks, it should not be considered fully autonomous.
“It’s important to realise that right now deployed AI … is automating a task but in a very particularly prescribed domain.” This becomes an ethical question, she added, “when we start to assign too much power to the idea of a software program we forget all the kinds of agencies humans have over the different aspects that go into building these systems.”
Do you have any examples of artificial intelligence in the newsroom? Please share them with the EJO via comments or our Facebook page. https://www.facebook.com/en.EJO.ch/
This article is the second in the EJO series on artificial intelligence in the newsroom. You may also be interested in reading: Smarter Journalism, Artificial Intelligence in the Newsroom
Image: Binary damage code, Markus Spiske, Flickr CC licence