Game Designed To Inoculate People Against Fake News Helps Increase Skepticism

A recent study done at the University of Cambridge showed that having people play a simple game that exposed the tricks used to disseminate fake news increased their resistance to, and skepticism about, fake news.

Disinformation, or “fake news”, continues to be a problem in the era of social media and online news, and researchers, psychologists, and engineers are looking for ways to combat its spread. Fake news isn’t just a concern for social scientists and politicians, either: a recent study by the Pew Research Center found that, in general, Americans think the proliferation of fake news is a bigger problem than racism, violent crime, or terrorism. The exact danger that fake news poses to society may be up for debate, but most people seem to agree that it’s a problem.

While many solutions to disinformation/fake news have focused on tweaking the algorithms used by media websites to better track or suppress fake news content, another approach is to equip people with critical/skeptical thinking tools so they can better spot disinformation.

Teaching Skepticism With Games

The University of Cambridge study took around 15,000 participants and had them play a browser-based game entitled Bad News. The game was structured so that participants could sow (simulated) disinformation themselves using a variety of techniques. It took about 15 minutes to complete, and players were tasked with manipulating content on social media and in news feeds using tactics like creating fake photo evidence in Photoshop, inventing conspiracy theories to attract followers, and deploying bots on Twitter. Throughout the game, participants tracked a “credibility score”, which reflected how persuasive their disinformation was.

Photo: (https://pixabay.com/illustrations/question-mark-important-sign-1872665/) by qimono via Pixabay, Pixabay License (https://pixabay.com/service/license/)

Dr. Sander van der Linden, Director of the Cambridge Social Decision-Making Lab, explains that the goal was to determine whether it was possible to preemptively debunk fake news by giving people a better understanding of the techniques used to create disinformation.

“Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle… We wanted to see if we could pre-emptively debunk, or ‘pre-bunk’, fake news by exposing people to a weak dose of the methods used to create and spread disinformation”, Van der Linden explained to the news department at Cambridge.

Dr. van der Linden explained that this technique is what psychologists refer to as “inoculation theory”, referencing the fact that exposing people to some of the ways disinformation is disseminated functions as a “psychological vaccination”.

Recognizing Disinformation Tactics

In order to understand how the game affected its players, the researchers gave players a series of headlines both before and after gameplay and instructed them to assign each headline a reliability rating. The headlines were a mixture of real and fake news. According to the results of the study, the misinformation inoculation had a measurable effect: the perceived credibility of fake news headlines was reduced by approximately 21% after participants played the game, compared to their assessments of headlines beforehand. No reduction in credibility was observed for real news headlines. Another promising finding of the study was that the participants who were most credulous about fake news saw the greatest benefit from playing Bad News.
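The pre/post design described above boils down to comparing mean credibility ratings for the same headlines before and after gameplay, separately for fake and real items. The sketch below illustrates that calculation; the headline ratings and the 1–7 scale are invented for illustration and are not the study’s actual data.

```python
# Illustrative pre/post analysis: percent change in mean perceived
# credibility of headlines before vs. after playing the game.
# All ratings below are hypothetical, not the study's real data.

def mean(xs):
    return sum(xs) / len(xs)

def percent_change(before, after):
    """Percentage change in mean rating from before to after (negative = drop)."""
    return (mean(after) - mean(before)) / mean(before) * 100

# Hypothetical 1-7 credibility ratings for the same fake headlines,
# collected pre- and post-game.
fake_before = [5, 4, 6, 5, 4]
fake_after  = [4, 3, 5, 4, 3]

# Real headlines should show no comparable drop.
real_before = [6, 6, 5, 6, 5]
real_after  = [6, 6, 5, 6, 5]

print(round(percent_change(fake_before, fake_after), 1))  # → -20.8
print(round(percent_change(real_before, real_after), 1))  # → 0.0
```

With these made-up numbers, the fake-headline ratings drop by about 21% while the real-headline ratings are unchanged, mirroring the shape (though not the data) of the reported result.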

Although 15 minutes of gameplay had only a moderate effect on any individual, when this solution is scaled up across thousands of people around the world, it has serious potential to build resistance against disinformation and fake news.

As ScienceDaily reported, the Bad News game gives players the opportunity to earn six different badges, each of which represents a typical strategy used by disseminators of fake news and misinformation. These badges are: conspiracy, discrediting sources, trolling, polarisation, emotionally provocative content, and impersonation.

Four of the fake news badges that participants could earn had in-game questions measuring the effects of the strategies. For example, the “impersonation” section saw an improvement of 24% in terms of distinguishing between fake tweets or headlines by accounts posing as trusted individuals on the various social media platforms. As far as some of the other strategies go, participants saw a reduction in the perceived credibility of headlines designed to be deliberately polarizing by approximately 10%, and for discrediting tactics an improvement of 19% was witnessed. Discrediting refers to trying to undermine legitimate news sources with false accusations of bias. Finally, the “conspiracy” section refers to the creation of disingenuous narratives that try to pin world events on secretive, powerful groups, and for this category perceived credibility was reduced by 20%.

Another author on the study, Jon Roozenbeek, also from Cambridge, explained that the study represents a shift from the general tactics used to combat fake news. Instead of attacking the ideas themselves and trying to debunk individual pieces of fake news, the researchers are trying to create a general psychological vaccine against misinformation. It is hoped that this tactic will prove much more scalable than trying to counter every specific piece of misinformation, falsehood, or conspiracy.

The Bad News program was designed by van der Linden and Roozenbeek, with assistance from the Dutch media collective DROG and the design agency Gusmanson. The team is now creating versions of the game in other languages, with Serbian, Polish, Greek, and German already supported. The messaging platform WhatsApp has also contacted the researchers about creating a version of the game for its platform.

Photo: (https://pixabay.com/illustrations/social-media-media-board-networking-1989152/) by geralt via Pixabay, Pixabay License (https://pixabay.com/service/license/)

The research team is also hoping to give young children the same skeptical tools it gives adults, and has created a junior version of the program that is currently available in 10 different languages. Roozenbeek explained that it is important to provide people with media literacy at a relatively early age, and that the team wants to follow up on this project to see how long the effects last.

“Our platform offers early evidence of a way to start building blanket protection against deception, by training people to be more attuned to the techniques that underpin most fake news,” explained Roozenbeek.

While the results of the study sound promising, it has limitations, as any study does. To begin with, the sample of players was self-selecting, since it relied on those who found the game online and chose to play it. As a result, the initial testing cohort skewed toward a more educated, younger, liberal, and male demographic. Even so, analysis of the data found that the game was approximately equally effective at inoculating people against fake news regardless of age, political persuasion, education, or gender. To control for ideological effects, the researchers gave players the choice to create fake news targeting either the right or the left of the political landscape.

Using AI Algorithms To Detect Fake News

Photo: (https://pixabay.com/illustrations/brain-circuit-intelligence-1845944/) via Pixabay, Pixabay License (https://pixabay.com/service/license/)

While the University of Cambridge team is focused on giving people the tools to think skeptically about misinformation, other researchers are using artificial intelligence to create tools that can detect fake news better than humans can. A team of researchers from the University of Michigan recently created an algorithm capable of detecting fake news, and this algorithm performed better than human analysts trying to detect disinformation.

The AI algorithm was capable of detecting fake news with 76% accuracy, versus 70% for human participants. The algorithm achieved this accuracy by focusing on words commonly used in fake news reports, as well as by studying grammatical structure and punctuation. As one example of these shared word choices and patterns, fake news reports are often filled with hyperbole, exaggerating their content with emotionally gripping words like “extraordinary”, “incredible”, or “overwhelming”.

The research conducted by the University of Michigan researchers was an offshoot of their previous research on lying in general, not necessarily the spread of fake news. However, there are many commonalities between the two lines of research, and the researchers were able to apply their findings regarding deception to their fake news algorithm.

Rada Mihalcea, an engineering professor and computer scientist at the University of Michigan, explained to Singularity Hub that telltale signs like emotional language and short sentences are indicative of fake news because writers attempt to compensate for the false nature of the report.

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” explained Mihalcea. “Deception is a complicated and complex phenomenon that requires brain power. That often results in simpler language, where you have shorter sentences or shorter documents.”