Avoid Getting Trapped in a Social Media Misinformation Rabbit Hole
The phrase “fake news” is everywhere lately, and fact-check flags are popping up on social media posts all over. At times it seems social media misinformation spreads faster than a resilient patch of dandelions. In these times, current events quickly become fodder for online partisan potshots.
“Fake news!” “Do your research!” “Don’t be a sheeple.”
I’m simplifying here, but when our brain is exhausted, it tries to help us out by using shortcuts – or heuristics – to process all that information quickly. In other words, your brain is going to use information from your previous experiences to quickly decide what the new information means and whether it’s important to you. It’s a neat feature. But when it comes to misinformation on social media, your brain’s resourcefulness and quick thinking can actually set you up to be sucked into the misinformation rabbit hole.
Memes and Misinformation: What’s the Connection?
Not so long ago, the average person thought of internet memes as silly little bits of the internet that wouldn’t hurt a fly, let alone have societal-level effects. Now, however, it’s clear that memes are often part of social media misinformation and disinformation campaigns online – a form of psychological warfare. Joan Donovan, a disinformation researcher at Harvard, calls this the “meme wars.”
For instance, we know that during the 2016 election cycle, Russian-based groups used Facebook ad tools to spread memes targeting both sides of the aisle, presumably to sow discord or foster polarization among the U.S. electorate. While Facebook instituted new rules around political advertising for the 2020 election, memes’ influence in politics isn’t going away any time soon.
Message Processing and Misinformation
So how can memes influence people?
People tend to interpret new information, especially when it’s complicated or ambiguous, in light of pre-existing beliefs. Many times, memes require a certain amount of “decoding” of the message – there’s usually some kind of imagery and some text that work together to make the meme’s full argument. Whenever we have to supply the conclusion of the argument ourselves – a rhetorical device called an enthymeme – we must rely on our previous knowledge to fill in the gaps and “decode” what the meme is saying.
Back in 2016, I conducted an experiment with political internet memes. I found that if two people with differing political ideologies see the same political meme, the person whose political viewpoints match that of the meme will believe it to be more effective and will be less critical of the meme than the person whose views don’t match up with the meme’s. In other words, the meme “pings” certain heuristics in the brain that tell the brain whether to be more critical or accept the information as is. This kind of partisan reasoning is part of what contributes to polarization.
At the same time, partisan content and misinformation tend to generate high engagement on social media – reactions, comments – and social media algorithms are built around engagement. This helps boost posts into users’ feeds as their connections engage with the content. There are a variety of reasons people share memes, but emotion and social identity are a couple of factors. Additionally, one recent study suggests people who are both predisposed to believe in conspiracy theories and prone to angry reactions may be most susceptible to accepting misinformation shared via memes.
Social Media Misinformation Rabbit Holes
Speaking of algorithms, our social media feeds are tailored to us. What you have engaged with in the past informs the algorithm regarding what to show you in the future. The Citizen Browser project demonstrates just how different Facebook feeds are for folks on different ends of the political spectrum. In many cases, it’s almost as if those folks live in two different worlds with no common ground. Because these algorithms emphasize engagement when recommending what to view next, they can end up surfacing misinformation. YouTube’s “recommended next” algorithm has particularly struggled with this.
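To make the dynamic concrete, here is a deliberately simplified toy sketch – not any platform’s actual algorithm, and every name in it is hypothetical – of an engagement-driven ranker. Topics a user has engaged with before get weighted more heavily, so the feed drifts toward more of the same, which is the feedback loop described above.

```python
# Toy illustration of an engagement-driven feed ranker.
# This is NOT any real platform's algorithm; it only shows the feedback
# loop: past engagement with a topic boosts similar posts in the feed.

from collections import Counter


def rank_feed(posts, engagement_history):
    """Order posts by how often the user engaged with each post's topic,
    breaking ties with the post's overall engagement count."""
    topic_weight = Counter(engagement_history)  # topic -> past engagements
    return sorted(
        posts,
        key=lambda p: (topic_weight[p["topic"]], p["engagements"]),
        reverse=True,
    )


posts = [
    {"id": 1, "topic": "sports", "engagements": 50},
    {"id": 2, "topic": "politics", "engagements": 20},
    {"id": 3, "topic": "politics", "engagements": 90},
]

# A user who has mostly engaged with political content sees politics first,
# even though the sports post is more popular than one of the political ones.
feed = rank_feed(posts, engagement_history=["politics", "politics", "sports"])
```

Even in this tiny sketch, the user’s own history, not the content’s accuracy, decides what rises to the top – which is why two users can end up with feeds that look like “two different worlds.”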
Finally, our social media feeds are full of people we know. People tend to be more influenced by something when they hear about it from someone they know. It’s why word of mouth is still the most powerful advertising there is – and why the neighborhood messaging app Nextdoor has become a vector for misinformation about the pandemic.
SIFT Your Way Out of Misinformation on Social Media
So, what can you do to be a savvier consumer of social media content? How can you avoid getting sucked into a social media misinformation rabbit hole? Michael Caulfield believes that, rather than be sucked into following link after link and wasting cognitive resources, internet users can make quick judgments about whether a piece of content is worth spending more time with. He suggests the acronym SIFT to assess potential misinformation on the internet. After all, our attention is our most valuable commodity. Here is how to use the SIFT approach.
1. Stop and smell the misinformation
Of course, I don’t mean literally sniff your screen. That would be weird. There’s an old expression “stop and smell the roses,” meaning slow down and notice what’s around you. Being mindful as you scroll your social feeds is the first step toward spotting misinformation on social media. When we are intentional about our social media use, we are less likely to unconsciously slip into relying on mental schema and heuristics. If you find yourself agreeing with a meme, consider the source.
2. Investigate the source
Before hitting share or passing on the tidbit of information as a fact, take a few seconds to do a little digging. Instead of spending any of your valuable time and attention assessing the veracity of the meme’s message itself, first check out the source. For example, is there a quote attributed to someone? What’s the name of the original person or group promoting the potential misinformation in the social media feed – i.e., if your friend shared it, where did they share it from? For instance, in the example above, the source appears to be a group or page called Heart of Texas.
Search for the source on Google (or another search engine of your choice). Do they have any other online presence? If so, what can you see about any potential biases they may have? While Wikipedia isn’t a great source for your research paper, it can still be really handy for quickly identifying whether the source is known for having a particular bias. If you can’t find any online presence other than their social media profile, or if you quickly find they are likely a biased source, it’s best to categorize the meme as misinformation and move on. A Facebook group with murky origins may very well be fake.
3. Find better coverage
If you’re still not sure, try Googling the main idea from the meme or social media content. What are reputable sources saying? Does the statement in the meme appear to be in line with scientific or other consensus on the issue, or is it an outlier? In either case, proceed with caution. However, an outlier is more likely to be misinformation.
4. Trace back to the original source
Is the information out of context from the original? Was it from a parody source? Did that person truly say that? Conduct a reverse image search to see if the image or video has been doctored or misleadingly edited. It’s common for memes to misrepresent a quote or inaccurately summarize an argument. If the meme misrepresents the original, proceed no further. If you need to investigate more, see whether the original itself is based on reputable sources and uses statistics responsibly.
The key for casual users is to move relatively quickly. If the source is likely trustworthy, then you can spend more time diving into the actual information if you like. If not, bail out and move on with your day. It’s not worth your time or your cognitive capacity to feed the algorithm.
Monitoring Social Media Misinformation
Further complicating matters, not all major social media platforms have policies prohibiting misinformation. This is slowly changing in regard to issues such as the pandemic and the 2020 election, however. For platforms that do have such policies, monitoring the content must rely heavily on artificial intelligence due to the sheer volume, and AI algorithms still have a difficult time with nuance and context. Consumer Reports has an informative round-up of these policies.
And when humans are part of content moderation efforts, it takes a heavy toll on the workers. The moderators must wade through some of the darkest content on the internet. Some Facebook content moderators report observing colleagues becoming negatively influenced by the content they were monitoring. In 2020, Facebook agreed to pay $52 million to moderators who developed mental health issues due to their moderation work.
Think tank RAND Corporation has compiled a helpful list of additional tools to help you avoid online misinformation. When researching for a class paper or project, it’s also still a good idea to search more deeply in reputable sources. Did you know the librarians at the Cornette Library will help you conduct research? I promise that it isn’t cheating to use their help to generate a list of sources for your next project. They’ll walk you through the process, and are a great campus resource.
You can avoid getting sucked into a social media misinformation rabbit hole with a little mindfulness and these tools.
Dr. Heidi E. Huntington
Assistant Professor of Business Communication