Nature has a fascinating piece (with great graphics, so please click through) on new research into how public opinion can be molded by a few strategically placed bots on a social media platform:
When social networks become primary conduits of information, the pattern of network connections influences what voters believe about others’ voting intentions. This influence matters, because people shift their own perspectives and voting strategies in response, either through behavioural spread known as social contagion or on the basis of strategic considerations.
Filter bubbles reinforce political views, or even make them more extreme, and drive political polarization. Stewart and colleagues now describe a related, but distinct, way in which social-network structure can affect voting behaviour.
The authors examined situations in which two groups of individuals struggle over a contentious decision, under the spectre of gridlock. They developed a model of voter choice based on game theory — a theoretical framework for analysing strategic behaviour. They tested this model with 2,520 real people playing an online game in groups of 12. The model and the experiment shared the same rules: each individual had a preferred outcome, but all individuals preferred consensus, even on the less favoured outcome, to inaction.
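The voting game described above can be sketched as a simple payoff function. A minimal sketch, assuming a 2/3 supermajority threshold and the payoff ordering win > concede > gridlock; the specific numbers and threshold are illustrative assumptions, not the parameters used by Stewart et al.:

```python
# Sketch of the consensus-vs-gridlock game: every voter has a preferred
# outcome, but any consensus beats deadlock. Payoff values are assumed
# for illustration only.
PAYOFF_WIN = 1.0        # your preferred option reaches the threshold
PAYOFF_CONCEDE = 0.5    # the other option wins: still better than gridlock
PAYOFF_GRIDLOCK = 0.0   # no consensus: everyone loses

def group_outcome(votes, threshold=2/3):
    """Return the winning option if either side reaches a supermajority,
    or None for gridlock."""
    for option in set(votes):
        if votes.count(option) / len(votes) >= threshold:
            return option
    return None

def payoff(my_preference, outcome):
    """Payoff for one voter given the group outcome (None = gridlock)."""
    if outcome is None:
        return PAYOFF_GRIDLOCK
    return PAYOFF_WIN if outcome == my_preference else PAYOFF_CONCEDE

# A 12-player group, as in the experiment: 8 votes clear a 2/3 threshold.
votes = ["yellow"] * 8 + ["purple"] * 4
print(group_outcome(votes))  # 'yellow'
```

The key feature is that `payoff` makes conceding strictly better than gridlock, which is what makes strategic voting against one's own preference rational when defeat looks likely.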
Such scenarios are common. For example, in the case of the US government budget process, failure to pass a budget results in a harmful government shutdown. To avoid gridlock, it might make sense to vote against one’s preferred option, particularly as the threat of gridlock increases and the chance of winning declines. Therefore, avoiding gridlock requires information about how others will vote.
In a ‘fair’ network, most people receive an accurate picture through their contacts about how others will vote. However, Stewart et al. discovered that, even without changing the number of connections that each individual has, networks can be rewired in ways that lead some individuals to reach misleading conclusions about community preferences. Ultimately, these misperceptions can even sway the course of an election. In this process, which the authors dub information gerrymandering, a network is arranged such that the members of one group waste their influence on like-minded individuals.
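One way to see the mechanism is to hold every voter's degree fixed and rewire only who connects to whom. The 12-node edge lists below are invented for illustration (they are not the networks from the study): in the "fair" wiring every voter's local sample reflects the true 50/50 split, while in the "gerrymandered" wiring party A's core wastes its influence on itself and two A voters wrongly perceive a B majority.

```python
# Illustrative sketch of information gerrymandering: same 12 voters,
# same degree (3) for every node, different wiring.

def adjacency(edges, n=12):
    """Build an undirected adjacency map from an edge list."""
    nbrs = {v: set() for v in range(n)}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    return nbrs

def local_estimates(prefs, nbrs):
    """Each voter's estimate of support for 'A': the fraction of
    themselves plus their contacts who prefer A."""
    est = {}
    for v, contacts in nbrs.items():
        sample = [v] + list(contacts)
        est[v] = sum(prefs[u] == "A" for u in sample) / len(sample)
    return est

# True split: voters 0-5 prefer A, voters 6-11 prefer B (exactly 50/50).
prefs = {v: "A" if v < 6 else "B" for v in range(12)}

# Fair wiring: every voter sees a balanced sample.
fair_edges = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, 11),
              (0, 6), (0, 8), (1, 7), (1, 9), (2, 10), (2, 6),
              (3, 11), (3, 7), (4, 8), (4, 10), (5, 9), (5, 11)]

# Gerrymandered wiring: A voters 0-3 only talk to each other (wasted
# influence), while B surrounds the remaining A voters 4 and 5.
gerry_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
               (4, 6), (4, 7), (4, 8), (5, 9), (5, 10), (5, 11),
               (6, 7), (8, 9), (10, 11), (6, 9), (7, 10), (8, 11)]

fair = local_estimates(prefs, adjacency(fair_edges))
gerry = local_estimates(prefs, adjacency(gerry_edges))

print(fair[4], gerry[4])   # 0.5 vs 0.25: voter 4 now sees a B majority
```

In the fair network every voter estimates support for A at exactly 0.5; in the rewired network voters 4 and 5 (who prefer A) estimate it at 0.25 and, under the strategic logic above, would concede to B, even though degrees and the true 6-6 split are unchanged.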
Indeed, the authors find evidence of information gerrymandering in the voting patterns of US and European legislative bodies, as well as in data from US federal elections.
The implications of Stewart and colleagues’ work are alarming. In the past, information was disseminated by a small number of official sources such as newspapers and television stations, or through real-world social networks that emerged largely from distributed processes involving individual interpersonal dynamics. This is no longer the case, because social-network websites deploy technologies that restructure social connections by design. These online social networks are highly dynamic systems that change as a result of numerous feedbacks between people and machines. Algorithms suggest connections; people respond; and the algorithms adapt to the responses. Together, these interactions and processes alter what information people see and how they view the world. In addition, micro-targeted political advertising offers a surreptitious and potent tool for information gerrymandering. Alternatively, information gerrymandering might arise without conscious intent, but simply as an unintended consequence of machine-learning algorithms that are trained to optimize user experience.
Let me repeat that last sentence: “Alternatively, information gerrymandering might arise without conscious intent, but simply as an unintended consequence of machine-learning algorithms that are trained to optimize user experience.”