The Washington Post reports on the results of an NYU researcher’s experiment using bots to reduce racial harassment on Twitter – and what automated anti-racism suggests about how behavior can be guided online:
In the short run, heavy-handed sanctions like account bans can actually embolden the users who are censored. There is excellent evidence that this happens in China when the regime censors users.
A better option might be to empower users to improve their online communities through peer-to-peer sanctioning.
…
The use of an experiment allowed me to tightly control the context for sanctioning. I sent every harasser the same message:
@[subject] Hey man, just remember that there are real people who are hurt when you harass them with that kind of language
I used a racial slur as the search term because I considered it the strongest signal that a tweet might contain racist harassment.
…
Overall, I had four types of bots: High Follower/White; Low Follower/White; High Follower/Black; and Low Follower/Black. My prediction was that messages from the different types of bots would vary in effectiveness: I expected High Follower/White bots to have the largest effect and Low Follower/Black bots only a minimal one.
I expected the white bots to be more effective than the black bots because all of my subjects were themselves white, and there is evidence that messages about social norms from the “in-group” are more effective than messages from the “out-group.”
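To picture the design, here is a minimal Python sketch of the 2x2 assignment (follower count crossed with bot race). The condition names, subject IDs, and simple random assignment are illustrative assumptions, not the study's actual code or procedure.

```python
# Hypothetical sketch of a 2x2 factorial assignment: follower level x bot race.
# Labels and counts are placeholders, not the researcher's real setup.
import random
from itertools import product

FOLLOWER_LEVELS = ["high_follower", "low_follower"]  # e.g., ~500 vs. very few followers
BOT_RACES = ["white", "black"]

CONDITIONS = [f"{level}/{race}" for level, race in product(FOLLOWER_LEVELS, BOT_RACES)]

def assign_conditions(subject_ids, seed=42):
    """Randomly assign each harassing account to one of the four bot types."""
    rng = random.Random(seed)
    return {subject: rng.choice(CONDITIONS) for subject in subject_ids}

if __name__ == "__main__":
    subjects = [f"user_{i}" for i in range(8)]  # placeholder account IDs
    for subject, condition in assign_conditions(subjects).items():
        print(subject, "->", condition)
```

Crossing the two factors is what lets the comparison below separate the effect of a bot's apparent status (follower count) from its apparent in-group or out-group identity.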
…
Only one of the four types of bots caused a significant reduction in the subjects’ rate of tweeting slurs: the High Follower/White bots, those with about 500 followers. The graph below shows that this type of bot caused subjects to tweet the slur an estimated 0.3 fewer times per day, on average, in the week after being sanctioned.
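To make that outcome concrete, here is a hedged sketch of the quantity being reported: the change in average slur tweets per day over the week after the bot's reply, relative to a pre-treatment week. The variable names, seven-day windows, and simple difference in means are assumptions for illustration, not the paper's actual estimator.

```python
# Illustrative calculation of the outcome described above: change in daily
# slur-tweeting rate (post-treatment week minus pre-treatment week).
from statistics import mean

def daily_rate(slur_count, days=7):
    """Average slur tweets per day over a window of `days`."""
    return slur_count / days

def condition_effect(pre_counts, post_counts):
    """Mean change in daily slur rate (post minus pre) across subjects in a condition."""
    changes = [daily_rate(post) - daily_rate(pre)
               for pre, post in zip(pre_counts, post_counts)]
    return mean(changes)

if __name__ == "__main__":
    # Toy numbers: a drop of roughly 0.3 slurs per day, similar in size to
    # the effect reported above for the High Follower/White bots.
    pre = [7, 10, 8]   # slur tweets in the week before the bot's reply
    post = [5, 8, 6]   # slur tweets in the week after
    print(f"Average change: {condition_effect(pre, post):+.2f} slurs/day")
```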