Nature takes time out from COVID-19 news to inveigh against another kind of epidemic – of online viral content that’s, shall we say, too often less than accurate:
In times of uncertainty, the vicious cycle is more potent than ever. Scientific debates that are typically confined to a small community of experts become fodder for mountebanks of all kinds.
Because ‘COVID-19’ and ‘coronavirus’ are unique keywords, they are easily exploited by scammers hiding among the 120,000 domains related to the outbreak. Attempting to dupe unwitting information seekers, scammers pair ‘COVID-19’ and ‘coronavirus’ with terms such as ‘masks’, ‘loan’, ‘unemployment’, ‘trial’, ‘vaccine’ and ‘cure’. Some domain companies are restricting the use of these keywords to prevent fraud, and tech companies are coordinating takedowns. But that’s not nearly enough.
After years of pressure, and even congressional hearings, tech companies are taking action against misinformation because the consequences of their doing nothing have become more obvious.
…
A month ago, my team began monitoring rumours about a potential ‘coronavirus cure’ circulating among tech investors on Twitter. The rumour quickly gained traction after technology entrepreneur Elon Musk shared a Google doc purporting to be a scientific paper from an adviser to Stanford University’s School of Medicine in California. The next night, on a popular right-wing Fox News broadcast, host Tucker Carlson featured the author of the Google doc, who claimed that hydroxychloroquine has a ‘100% cure rate’ against COVID-19 on the basis of a small study in France. Moments later, online searches for ‘quinine’, ‘tonic water’, ‘malaria drug’ and the like surged. If there had been a product to buy, it would have sold out. Clearly, the public was listening.
In the wake of the Fox News broadcast, Stanford clarified that the author was not an adviser and the school was not involved. Efforts by doctors and scientists to counter this dangerous speculation on their personal social-media accounts were no match for the tweet storm.
…
Tech companies acknowledge that groups such as the Internet Research Agency and Cambridge Analytica used their platforms for large-scale operations to influence elections within and across borders. At the same time, these companies have balked at removing misinformation, which they say is too difficult to identify reliably.