Science News is a little late to the game, but that’s the new reality. Researchers have completed some early studies on how fake news gets spread by bots – by acting super fast, and by targeting real people with lots of followers:
Filippo Menczer, an informatics and computer scientist at Indiana University Bloomington, and colleagues analyzed 13.6 million Twitter posts from May 2016 to March 2017. All of these messages linked to articles on sites known to regularly publish false or misleading information. Menczer’s team then used Botometer, a computer program that learned to recognize bots by studying tens of thousands of Twitter accounts, to determine the likelihood that each account in the dataset was a bot.
Unmasking the bots exposed how the automated accounts encourage people to disseminate misinformation. One strategy is to heavily promote a low-credibility article immediately after it’s published, which creates the illusion of popular support and encourages human users to trust and share the post. The researchers found that in the first few seconds after a viral story appeared on Twitter, at least half the accounts sharing that article were likely bots; once a story had been around for at least 10 seconds, most accounts spreading it were maintained by real people.
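The timing analysis described above can be sketched in a few lines of Python. This is a toy illustration only: the share records, bot scores, and the 0.5 cutoff are hypothetical stand-ins, not the study’s actual data or Botometer’s real API.

```python
# Toy illustration of the timing analysis: what fraction of shares in a
# given time window come from accounts that look like bots?
# All data and the 0.5 score threshold are hypothetical.

def bot_share(shares, start, end, threshold=0.5):
    """Fraction of shares in [start, end) seconds after publication
    that come from accounts whose bot-likelihood score exceeds the
    threshold."""
    window = [s for s in shares if start <= s["seconds_after_publish"] < end]
    if not window:
        return 0.0
    bots = sum(1 for s in window if s["bot_score"] > threshold)
    return bots / len(window)

# Hypothetical shares of one low-credibility article:
# seconds after publication, plus a Botometer-style score in [0, 1].
shares = [
    {"seconds_after_publish": 1,  "bot_score": 0.9},
    {"seconds_after_publish": 2,  "bot_score": 0.8},
    {"seconds_after_publish": 3,  "bot_score": 0.2},
    {"seconds_after_publish": 12, "bot_score": 0.1},
    {"seconds_after_publish": 15, "bot_score": 0.3},
    {"seconds_after_publish": 20, "bot_score": 0.7},
]

early = bot_share(shares, 0, 10)   # first few seconds: mostly bots
late = bot_share(shares, 10, 60)   # after 10 seconds: mostly humans
```

On this made-up data, `early` comes out above one half and `late` below it, mirroring the pattern the researchers report: bots dominate the first seconds, humans take over after that.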
The bots’ second strategy involves targeting people with many followers, either by mentioning those people specifically or by replying to their tweets with posts that link to low-credibility content. If a single popular account retweets a bot’s story, “it becomes kind of mainstream, and it can get a lot of visibility,” Menczer says.
These findings suggest that shutting down bot accounts could help curb the circulation of low-credibility content. Indeed, in a simulated version of Twitter, Menczer’s team found that weeding out the 10,000 accounts judged most likely to be bots could cut the number of retweets linking to shoddy information by about 70 percent.
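The removal experiment can be mimicked with a simple ranking-and-filtering sketch. Again, this is an assumption-laden toy, not the team’s simulation: the account names, bot scores, and retweet counts are invented, and real bot removal would be done on Twitter’s actual retweet graph.

```python
# Toy version of the removal experiment: rank accounts by bot score,
# drop the top k, and measure how many retweets of low-credibility
# links disappear. All accounts, scores, and counts are made up.

def retweets_after_removal(accounts, k):
    """Total low-credibility retweets left after removing the k
    accounts most likely to be bots (highest bot score first)."""
    ranked = sorted(accounts, key=lambda a: a["bot_score"], reverse=True)
    survivors = ranked[k:]
    return sum(a["retweets"] for a in survivors)

accounts = [
    {"name": "bot_a",  "bot_score": 0.95, "retweets": 400},
    {"name": "bot_b",  "bot_score": 0.90, "retweets": 350},
    {"name": "user_c", "bot_score": 0.30, "retweets": 150},
    {"name": "user_d", "bot_score": 0.10, "retweets": 100},
]

before = retweets_after_removal(accounts, 0)
after = retweets_after_removal(accounts, 2)
reduction = 100 * (before - after) / before  # percent of retweets removed
```

Because bot-heavy accounts contribute a disproportionate share of the retweets in this toy dataset, removing just the two most bot-like accounts wipes out most of the traffic, which is the qualitative effect the study found at the scale of 10,000 accounts.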
Bots have used similar methods in an attempt to manipulate online political discussions beyond the 2016 U.S. election, as seen in another analysis of nearly 4 million Twitter messages posted in the weeks surrounding Catalonia’s bid for independence from Spain in October 2017. In that case, bots bombarded influential human users — both for and against independence — with inflammatory content meant to exacerbate the political divide, researchers report online November 20 in the Proceedings of the National Academy of Sciences.