Russian Trolls And Malware Bots Contributing To Breakdown Of Discussion Regarding Vaccine Policy

Recent research from George Washington University in Washington, DC, has found that Russian web brigades, or trolls, along with malware bots may be promoting misinformation and confusion about vaccination.

The researchers from GWU didn’t set out to uncover evidence of vaccine misinformation campaigns propagated by Russian trolls; that finding was a byproduct of their study. The goal of the recently completed study was to examine how people’s use of social media affected their discussion of vaccines and to determine whether vaccine education and communication on social media platforms could be improved. Yet an analysis of the data showed that communication about US health policy was probably being influenced by outside actors.

Trolls And Bots

After collecting a sample of approximately 1.8 million tweets posted between July 2014 and September 2017, the researchers tried to account for the influence of trolls and bots that may have created or amplified content intended to make the vaccine discussion more hostile and riddled with misinformation. The study quantified the influence of a set of tweets carrying the hashtag “#VaccinateUS”. The hashtag seems to have originated with the Internet Research Agency (IRA). The IRA, also known as Glavset, is suspected by United States intelligence agencies and independent investigators to be a Russian company with ties to the current Russian government. It is also suspected of interfering in the 2016 US election.
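To give a concrete (and purely hypothetical) sense of the kind of filtering such a study involves, the sketch below pulls the “#VaccinateUS” tweets out of a small made-up sample; the field names and example tweets are invented for illustration and are not the researchers’ actual data or pipeline.

```python
from datetime import datetime

# Invented toy sample; the real study analyzed roughly 1.8 million tweets
# posted between July 2014 and September 2017.
tweets = [
    {"text": "Vaccines save lives #VaccinateUS", "created_at": datetime(2016, 5, 1)},
    {"text": "Just got my flu shot, barely felt it", "created_at": datetime(2015, 11, 3)},
    {"text": "#VaccinateUS You can't trust what they put in those shots", "created_at": datetime(2017, 2, 14)},
]

START, END = datetime(2014, 7, 1), datetime(2017, 9, 30)

# Keep only tweets inside the study window that carry the campaign hashtag.
vaccinate_us = [
    t for t in tweets
    if START <= t["created_at"] <= END and "#vaccinateus" in t["text"].lower()
]

print(f"{len(vaccinate_us)} of {len(tweets)} sample tweets carry #VaccinateUS")
```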

The IRA is suspected of having carried out far-reaching trolling operations intended to subvert US democratic processes. While political propaganda has existed in one form or another for as long as nations and governments have dealt with one another, the tactics used by the Russian trolls are unique to the digital age. Social media platforms can be used to disseminate misinformation rapidly, whether through concentrated trolling aimed at certain communities or through the use of bots. Bots and trolls can also fan the flames of hostility, sowing dissent between those with political differences and making civil, productive discussion more difficult.

Beyond the actions of Russian trolls, the problem of misinformation is being driven by another source: bots called “content polluters,” which are created either to drive traffic and advertising to certain websites or to spread computer viruses. These bot accounts tend to promote anti-vaccination myths approximately 75% more often than the average Twitter account does. An even more vicious anti-vaccination message comes from accounts that are suspected of being bots but can’t definitively be categorized as such.

The research suggests that the blame for the breakdown of civility and the spread of misinformation can’t be put solely on bots and trolls; it’s a larger societal issue. Nonetheless, however complex the issues of growing partisanship and fake news are, the evidence does seem to imply that foreign trolls and bots have played some part in the widening social divide.

Health Consequences

The question for the researchers, after finding data implicating Russian trolls and content-polluting bots, was how harmful these techniques could be to society.

Computer scientist Mark Dredze, of Johns Hopkins University in Maryland, explains the danger of vaccine misinformation campaigns:

“By playing both sides, they erode public trust in vaccination, exposing us all to the risk of infectious diseases. Viruses don’t respect national boundaries.”

Herd immunity is an important concept in public health because small drops in vaccination rates can allow a disease to spread rapidly through a community that would otherwise be protected. In Europe, falling vaccination rates over the past few years have contributed to a measles outbreak, with more than 41,000 people infected in the first half of this year. The outbreak was responsible for 37 deaths.
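As a rough, back-of-the-envelope illustration (these numbers are standard textbook approximations, not figures from the GWU study), the herd immunity threshold is often estimated as 1 − 1/R0, where R0 is the average number of people one infected person would infect in a fully susceptible population. Measles is usually assigned an R0 of roughly 12 to 18, which is why even small dips below the required coverage matter:

```python
# Standard epidemiological approximation of the herd immunity threshold;
# not a calculation from the study itself.
def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

for r0 in (12, 15, 18):  # commonly cited R0 range for measles
    print(f"R0 = {r0}: coverage needed ~ {herd_immunity_threshold(r0):.0%}")
```

With an R0 in that range, roughly 92% to 94% of a population needs to be immune, leaving little margin before outbreaks become possible.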

David Broniatowski, the lead author on the study, explained that while it is basically impossible to determine exactly how many messages were posted online by trolls and bots, a “significant portion” of the messages related to vaccines could have been created by “malicious actors with a range of hidden agendas”.

Motivations

What could those hidden agendas be? It may not be that those who sow dissent or misinformation actually want outbreaks of disease to occur; rather, the goal could be to find a new way to destabilize trust in scientific authorities, democratic processes, and government bodies. Looking at the language used by the trolls and the bots revealed something interesting: while the conversations generally mirrored the debate over vaccine policy, they made far heavier use of political rhetoric, such as “constitutional rights”, “parental rights”, and “freedom.” Accounts tied to the Russian government posted approximately equal amounts of pro-vaccine and anti-vaccine content, yet the content was consistently aggressive and polarizing.


When it comes to content-polluting bots, those who create them may simply be hoping to spread malware or drive traffic to certain links, rather than pursuing an explicitly anti-vaccine agenda. Malware distributors and traffic drivers may just be releasing messages they hope will make people passionate yet credulous enough to click suspicious links, and anti-vaccine audiences may fit that profile.

Even if the goal of bad actors isn’t to cause further outbreaks, but simply to widen rifts between groups of people with different viewpoints, the possible outcomes are still quite dangerous.

The tactics of bad actors who seek to sow dissent exploit people’s tribal instincts, so efforts to combat the problem and foster more civil dialogue may have to find ways around our psychological shortcomings.

Tech and social media companies are placing some of their hopes on artificial intelligence to combat misinformation and hate speech online. Machine learning algorithms can be trained to recognize patterns of behavior and potentially harmful content, automatically flagging posts for review by moderators. Facebook is currently testing such techniques in Myanmar, where the platform has been used to incite ethnic violence against the Rohingya people. Meanwhile, activist groups are calling for algorithmic transparency in the AI systems used to flag posts and curb the spread of misinformation or hate speech, fearing that the technology could be misused or could unfairly target certain groups over others.
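For a sense of what “training an algorithm to flag potentially harmful content” can look like in its simplest form, here is a minimal sketch using scikit-learn; the example posts and labels are invented, and the production systems at these companies are far more sophisticated and not publicly documented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = flag for human review, 0 = leave alone.
posts = [
    "Vaccines are a secret plot to poison your children",
    "People like you deserve whatever happens to you",
    "Got my second dose today, arm is a little sore",
    "The CDC schedule for this year's flu shots is out",
]
labels = [1, 1, 0, 0]

# TF-IDF word features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# New posts scoring above a threshold would be queued for a human moderator.
new_posts = ["The flu shot is how they track you, spread the word"]
scores = model.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    action = "flag for human review" if score >= 0.5 else "no action"
    print(f"{score:.2f}  {action}: {post}")
```

In practice the hard problems are exactly the ones Zuckerberg points to below: context, sarcasm, and half-truths, which a simple bag-of-words model like this handles poorly.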

In addition to using AI to stem the tide of misinformation and polarization, Facebook has pledged to hire many more content review employees. Mark Zuckerberg has stated that while he believes AI systems have a role to play in catching misinformation and hate speech, it will be five to ten years before such systems can adequately recognize nuances like context, half-truths, and sarcasm.