An analysis of information shared on Twitter during the 2016 U.S. presidential election has found that automated accounts, or “bots,” played a disproportionate role in spreading misinformation online.
The study, conducted by Indiana University researchers and published Nov. 20 in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017, a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017.
A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the “low-credibility” information on the network. These accounts were also responsible for 34 percent of all articles shared from “low-credibility” sources.
The study also found that bots play a major role in promoting low-credibility content during the first few moments before a story goes viral.
“This study finds that bots significantly contribute to the spread of misinformation online—as well as shows how quickly these messages can spread,” the researchers said.
“People tend to put greater trust in messages that appear to originate from many people,” they added. “Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”
Information sources labeled as low-credibility, such as websites with misleading names like “USAToday.com.co,” include outlets with both right- and left-leaning points of view.
The researchers also identified other tactics for spreading misinformation with Twitter bots. These included amplifying a single tweet—potentially controlled by a human operator—across hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.
For instance, the study cites a case in which a single account mentioned @realDonaldTrump in 19 separate messages about millions of illegal immigrants casting votes in the presidential election—a false claim that was also a major administration talking point.
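Repetition patterns like these are, in principle, detectable from activity logs alone. The sketch below is a rough illustration of that idea, not the study’s actual methodology: it scans a made-up log of (account, link, mention) records and flags accounts that repeat the same link or mention the same target far more often than a typical user would. Every field name and threshold here is an assumption chosen for the example.

```python
from collections import Counter, defaultdict

# Hypothetical tweet record: (account, link_shared, mentioned_account).
# All field names and thresholds are invented for illustration.
REPEAT_LINK_THRESHOLD = 10     # same link posted this many times
REPEAT_MENTION_THRESHOLD = 15  # same account mentioned this many times

def flag_amplifiers(tweets):
    """Flag accounts whose repetition patterns resemble bot amplification."""
    link_counts = defaultdict(Counter)     # account -> link -> count
    mention_counts = defaultdict(Counter)  # account -> mention -> count
    for account, link, mention in tweets:
        if link:
            link_counts[account][link] += 1
        if mention:
            mention_counts[account][mention] += 1

    flagged = set()
    for account, counts in link_counts.items():
        if max(counts.values()) >= REPEAT_LINK_THRESHOLD:
            flagged.add(account)
    for account, counts in mention_counts.items():
        if max(counts.values()) >= REPEAT_MENTION_THRESHOLD:
            flagged.add(account)
    return flagged

# Example: one account mentions the same influential user 19 times,
# echoing the tactic described in the study.
log = [("suspect_01", None, "@realDonaldTrump")] * 19
log += [("normal_user", "example.com/story", None)]
print(flag_amplifiers(log))  # {'suspect_01'}
```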
The researchers also ran an experiment estimating how the network would behave with bot accounts removed. “This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks,” the authors wrote.
The study also suggests steps companies could take to slow the spread of misinformation on their networks. These include improving algorithms to automatically detect bots and requiring a “human in the loop” to reduce automated messages in the system.
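As a minimal sketch of how those two ideas might fit together, the toy pipeline below scores accounts on two illustrative signals (posting volume and retweet ratio) and routes borderline cases to a human reviewer instead of acting on them automatically. The features, weights, and cutoffs are assumptions for illustration; real detection systems rely on far richer signals.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    # Illustrative features only; real detectors use many more signals.
    name: str
    tweets_per_day: float
    retweet_ratio: float  # fraction of posts that are retweets, 0..1

def bot_score(a: AccountStats) -> float:
    """Toy score in [0, 1]: heavy, retweet-dominated posting looks bot-like."""
    volume = min(a.tweets_per_day / 100.0, 1.0)  # saturate at 100 tweets/day
    return 0.5 * volume + 0.5 * a.retweet_ratio

def triage(accounts, auto_threshold=0.9, review_threshold=0.6):
    """Split accounts into auto-flagged, human-review, and cleared queues."""
    auto, review, cleared = [], [], []
    for a in accounts:
        score = bot_score(a)
        if score >= auto_threshold:
            auto.append(a.name)       # confident enough to rate-limit
        elif score >= review_threshold:
            review.append(a.name)     # the "human in the loop" decides
        else:
            cleared.append(a.name)
    return auto, review, cleared

accounts = [
    AccountStats("suspect_01", tweets_per_day=400, retweet_ratio=0.95),
    AccountStats("active_fan", tweets_per_day=60, retweet_ratio=0.7),
    AccountStats("casual_user", tweets_per_day=3, retweet_ratio=0.2),
]
print(triage(accounts))  # (['suspect_01'], ['active_fan'], ['casual_user'])
```

The middle queue is the point of the “human in the loop”: automation handles clear-cut cases, while ambiguous accounts get human judgment before any action is taken.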
The researchers noted that platforms such as Snapchat and WhatsApp may struggle to control misinformation on their networks because their use of encryption complicates efforts to study how their users share information.
“As people across the globe increasingly turn to social networks as their primary source of news and information, the fight against misinformation requires a grounded assessment of the relative impact of the different ways in which it spreads,” the researchers concluded.