You may have heard media reports linking Russian trolls to the vaccine safety debate over the summer. These reports arose following the release of a paper by David A. Broniatowski and colleagues, titled "Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate," which was published online ahead of print in the American Journal of Public Health.
The researchers reviewed about 1.8 million tweets posted between July 2014 and September 2017 to determine the impact of Twitter bots, which are automated accounts, and trolls, which are individuals who misrepresent their identities in order to "promote discord."
The goals were to:
- Determine the relative rate of vaccine-related tweeting by bots and trolls compared with average Twitter users
- Assess whether the tweets were pro-vaccine, anti-vaccine or neutral
- Describe a vaccine-related hashtag used by Russian trolls
The primary findings included:
- Twitter accounts identified as Russian trolls by the U.S. Congress were more likely to tweet about vaccine-preventable diseases but not necessarily about vaccines.
- Content polluters, accounts that spread malware and unsolicited commercial content, posted significantly more anti-vaccine rhetoric; in contrast, trolls and sophisticated bots posted similar amounts of pro- and anti-vaccine content.
- The hashtag #VaccinateUS was used differently from most other vaccine-related hashtags: its messaging was neither clearly pro- nor anti-vaccine. Posts using the hashtag were characterized by grammatical errors and unnatural word choices and phrasing; tended to promote conspiracy theories; were tied to U.S. politics; and invoked divisive topics in an effort to exploit socioeconomic tensions that exist in the U.S.
The authors conclude, “Therefore, beyond attempting to prevent bots from spreading messages over social media, public health practitioners should focus on combating messages themselves while not feeding the trolls.”