Scientific American covers a study (by David Rand and Gordon Pennycook, published in Nature) that looked at why misinformation spreads online and found that it isn’t because people are trying to spread pleasing lies, or even because some people don’t care what’s true – it’s just the way being online works on human brains:
Our research finds that most people do not wish to share inaccurate information (in fact, over 80 percent of respondents felt that it’s very important to only share accurate content online) and that, in many cases, people are fairly good (overall) at distinguishing legitimate news from false and misleading (hyperpartisan) news. Research we’ve conducted consistently shows that it’s not partisan motivations that lead people to fail to distinguish between true and false news content, but rather simple old lazy thinking. People fall for fake news when they rely on their intuitions and emotions, and therefore don’t think enough about what they are reading—a problem that is likely exacerbated on social media, where people scroll quickly, are distracted by a deluge of information, and encounter news mixed in with emotionally engaging baby photos, cat videos and the like.
This means that when thinking about the rise of misinformation online, the issue is not so much a shift in people’s attitudes about truth, but rather a more subtle shift in attention to truth. There’s a big disconnect between what people believe and what they share. For example, in one study, some participants were asked if they would share various headlines, while other participants were asked to judge the headlines’ accuracy. Among the false headlines, we found that 50 percent more were shared than were rated as accurate. The question, then, is why.
To state the obvious: Social media platforms are social.
…
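That "50 percent more" figure is a ratio between two rates measured on the same set of false headlines: the rate at which one group said they would share them, and the rate at which a separate group judged them accurate. Here's a minimal sketch of that arithmetic in Python – every number in it is invented for illustration, not taken from the study:

```python
# Hypothetical numbers only: for each false headline, the fraction of
# one group willing to share it and the fraction of a separate group
# rating it accurate. None of these values come from the study.
false_headlines = {
    "headline_a": {"share_rate": 0.24, "accuracy_rate": 0.15},
    "headline_b": {"share_rate": 0.18, "accuracy_rate": 0.12},
    "headline_c": {"share_rate": 0.30, "accuracy_rate": 0.21},
}

shared = sum(h["share_rate"] for h in false_headlines.values())
rated_accurate = sum(h["accuracy_rate"] for h in false_headlines.values())

# "50 percent more shared than rated accurate" means this ratio is ~1.5.
print(f"shared / rated-accurate ratio: {shared / rated_accurate:.2f}")
```

The point of the comparison is that sharing decisions and accuracy judgments come apart: the same headlines get passed along at a higher rate than they get believed.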
We conducted a large field experiment on Twitter where we sent a simple accuracy prompt to over 5,000 users who had recently shared links from Breitbart or Infowars. Our intervention did not provide novel information, nor did it prescriptively ask people to be more accurate or be vigilant about fake news. Instead, we simply asked them for their opinion about the accuracy of a single nonpolitical news headline. We didn’t expect them to actually respond to our question; our goal was to remind people about the concept of accuracy (which, again, the vast majority of people believe to be important) by simply asking about it.
We found that being asked the single accuracy question improved the average quality of news sources the users subsequently shared on Twitter.
…
Accuracy prompts are certainly not going to solve the whole misinformation problem. But they represent a novel tool that platforms can leverage to get ahead of misinformation, instead of only playing catch-up by fact-checking falsehoods after they’ve been shared or censoring once things get out of hand.
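The outcome measure in that field experiment – "average quality of news sources" – is worth unpacking: each shared link is scored by a trust rating for its domain, and those scores are averaged. Here's a rough, hypothetical sketch of that kind of measure; the study used professional fact-checker ratings of news outlets, whereas the domains, scores, and function names below are all invented for illustration:

```python
# Hypothetical sketch: score each shared link by a trust rating for its
# domain, then average the scores. Domain scores and example URLs are
# invented; the study used fact-checker ratings of news outlets.
from urllib.parse import urlparse
from statistics import mean

# Invented 0-1 trust scores for a few domains.
DOMAIN_QUALITY = {
    "nytimes.com": 0.9,
    "breitbart.com": 0.3,
    "infowars.com": 0.1,
}

def source_quality(url: str) -> float | None:
    """Look up the trust score for a URL's domain, if we have one."""
    domain = urlparse(url).netloc.removeprefix("www.")
    return DOMAIN_QUALITY.get(domain)

def average_quality(shared_urls: list[str]) -> float:
    """Mean trust score over the links whose domains we can rate."""
    scores = [s for s in map(source_quality, shared_urls) if s is not None]
    return mean(scores) if scores else float("nan")

before = ["https://www.infowars.com/a", "https://www.breitbart.com/b"]
after = ["https://www.breitbart.com/c", "https://www.nytimes.com/d"]

print(f"average source quality before prompt: {average_quality(before):.2f}")
print(f"average source quality after prompt:  {average_quality(after):.2f}")
```

Framed this way, "improved quality" doesn't require anyone to stop sharing entirely – it just means the mix of domains people share shifts toward higher-rated sources after the nudge.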