New Research Examines Algorithms vs Editors
Algorithms can be better than human editors at predicting what news stories people will read online—but the advantage has distinct limits and can lead readers to consume a less diverse mix of news.
Those are two key findings of new research by a team studying how the use of artificial intelligence (AI) affects the consumption of online news.
The main takeaway from the work is that humans and algorithms can complement one another, said Ananya Sen, a postdoctoral researcher at MIT’s Initiative on the Digital Economy, who conducted the study with Joerg Claussen of the Munich School of Management and Christian Peukert of the Catolica-Lisbon School of Business and Economics. Human editors, for instance, consistently do a better job than algorithms when an unexpected breaking news story hits, since there is little prior data on such stories to guide the machines.
“Data can give [a news organization] some level of strategic advantage,” said Sen, “but it doesn’t seem to be a runaway hit that will give you dominance in the market.”
In a new paper, “The Editor vs. the Algorithm: Targeting, Data and Externalities in Online News,” published in June, the researchers laid out how they tested their ideas at an unnamed German news organization that receives more than 20 million unique visitors to its website each month. “It is important to note that it is rare for major legacy news outlets in the world to experiment with algorithmic curation of their homepage,” the researchers wrote in the paper. Such experimentation is a new field of research.
The researchers split readers into a control group, which saw the organization’s website as assembled by human editors, and a “treatment group,” which saw a version in which one of the four slots on the page was personalized by an algorithm that studied the reader’s preferences.
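The paper does not publish the experiment's code, but the setup described above can be sketched in a few lines. The hash-based assignment, the four-slot homepage, and the "most-clicked topic" heuristic standing in for the study's actual recommendation algorithm are all illustrative assumptions:

```python
import hashlib
import random

# Hypothetical editor-chosen homepage: four story slots, each tagged by topic.
EDITOR_PICKS = ["politics_a", "economy_b", "sports_c", "culture_d"]

def assign_group(user_id: str) -> str:
    """Deterministically split readers 50/50 into control and treatment."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "treatment" if digest % 2 == 0 else "control"

def personalize_slot(history: list[str], candidates: list[str]) -> str:
    """Pick the candidate matching the reader's most-clicked topic.
    (A stand-in heuristic, not the study's actual algorithm.)"""
    if not history:
        return random.choice(candidates)
    top_topic = max(set(history), key=history.count)
    for story in candidates:
        if story.startswith(top_topic):
            return story
    return random.choice(candidates)

def build_homepage(user_id: str, history: list[str], candidates: list[str]) -> list[str]:
    """Control readers see the editors' page; treatment readers get one
    of the four slots replaced by a personalized pick."""
    page = EDITOR_PICKS.copy()
    if assign_group(user_id) == "treatment":
        page[3] = personalize_slot(history, candidates)
    return page
```

Hashing the user ID, rather than flipping a coin per visit, keeps each reader in the same experimental arm across sessions, which is what a clean treatment/control comparison requires.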
Diminishing Economic Returns
At first, when there’s limited data on where readers click, the humans outperform the algorithm—meaning they do a better job of picking stories that readers are more likely to click on to read. But this shifts as the number of visits grows and the algorithm gains insight on what people prefer to read. Starting at around 10 visits, the algorithm and the editor perform roughly equally. But as data piles up, the algorithm pulls ahead. Nonetheless, there’s a limit to the gains. The researchers found that after about 50 website visits, there’s a decreasing economic return from this growing trove of data. (See Figure 1).
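One intuition for why returns to data flatten out, separate from the paper's own estimates, is statistical: the uncertainty in an estimated click probability shrinks roughly with the square root of the number of observed visits, so each additional visit buys less improvement than the last. A minimal illustration, assuming a reader with a fixed 30% click propensity:

```python
import math

def std_error(n_visits: int, p: float = 0.3) -> float:
    """Standard error of an estimated click probability after n visits
    (binomial sampling error: sqrt(p*(1-p)/n))."""
    return math.sqrt(p * (1 - p) / n_visits)

# The marginal reduction in uncertainty from one more visit keeps shrinking:
gains = [std_error(n) - std_error(n + 1) for n in (1, 10, 50, 100)]
assert all(earlier > later for earlier, later in zip(gains, gains[1:]))
```

This toy model is not the paper's revenue analysis; it only shows why, mechanically, the 51st visit teaches an algorithm far less about a reader than the 2nd visit did.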
“Overall, the algorithm outperforms the human editor when it has access to sufficient data though in the early stages, the human is better at predicting the average taste of readers,” the researchers note. “Therefore, the optimal strategy for a news outlet seems to be to employ a combination of the algorithm and the human to maximize user engagement.”
The findings have implications for antitrust and market-power issues associated with online platforms. The fact that there are diminishing economic and revenue returns from access to individual user data suggests there are limited strategic and financial advantages for firms that gather it. In addition, if privacy concerns lead to limits on how much of this data firms can gather and hold, the results suggest such limits shouldn’t hamper the performance of algorithms.
The study also looked at the potential downside of predicting reader preferences using AI. “The news is different from a standard product because of its public-good nature,” the authors wrote. “In particular, the algorithm is trained on prior individual-level data, which is ‘biased’ toward personal preferences and could be at odds with ‘socially optimal’ reading behavior.”
The researchers discovered that readers who had articles selected by the algorithm were more likely than those in the control group to read articles similar to those recommended. In other words, the algorithm tended to reinforce the “bubble” around readers, as they opted to read a narrower set of topics than they might have otherwise. The researchers also sought to assess the characteristics of readers who are prone to “go down the rabbit hole” and reduce the variety of stories they read.
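The "narrower set of topics" effect described above is a statement about the diversity of a reader's news diet. One common way to quantify such diversity, offered here as an illustration rather than as the paper's actual metric, is the Shannon entropy of the reader's topic mix:

```python
import math
from collections import Counter

def topic_entropy(reads: list[str]) -> float:
    """Shannon entropy (in bits) of a reader's topic mix.
    Lower entropy means a narrower news diet."""
    counts = Counter(reads)
    total = len(reads)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

broad = ["politics", "sports", "economy", "culture"]
narrow = ["politics", "politics", "politics", "sports"]
assert topic_entropy(broad) > topic_entropy(narrow)
```

Under a metric like this, the study's "bubble" finding corresponds to treatment-group readers drifting toward lower topic entropy than the control group.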
One example of this trend can be seen in voting patterns. The researchers discovered that readers who lived in German states where there was a high share of votes for extreme political parties—both left and right wing—in the last election were more likely to increase their consumption of political stories when their stories were selected by the algorithm. In addition, readers in regions with higher voter turnout—a proxy for being more informed—were less likely to increase their share of political news.
Sen said these findings show that there’s a limit to the power of algorithms. “When you use recommendation algorithms you don’t see the 40% or 50% rise in engagement that some researchers have hypothesized,” he said. That means the best tools going forward will remain a mix of human and machine.
Timothy Aeppel is a Research Affiliate at the MIT Initiative on the Digital Economy and a reporter for Reuters.