Many people blame social media networks for the rapid spread of fake news stories.
But our brains may be the real cause. Research shows that fresh, interesting lies are shared more often and spread faster than the boring old truth. Fake news publishers seem to understand intuitively that we can’t resist novelty, and they harness our hardwired need to pay attention to what’s new.
Specifically, blame lateral inhibition, a neurobiological process our brains use to take in information and make sense of the world, for the proliferation of fake news. Lateral inhibition helps our senses manage an overload of data, explains Bill Paarlberg, editor of The Measurement Advisor from Paine Publishing.
When stimulated, sensory cells suppress the activity of cells around them. That helps us process large amounts of visual stimuli by emphasizing differences between points of light. In other words, lateral inhibition provides an evolutionary advantage by highlighting motion and whatever is new and different. That’s why “New!” and “Improved!” are powerful words in advertising and why “Breaking News!” grabs our attention.
“Back in the days of yore, when a predator appeared on the horizon we needed to see it and react,” Paarlberg explains. “Today, whether you are sitting at a board meeting or standing at home plate, sensing change is what allows you to respond and adapt to your environment.”
“Us oh-so-smart humans don’t just fall for fake news, we are attracted to it. Like moths to a flame, we follow its fascination to our doom,” Paarlberg says.
Research from the Massachusetts Institute of Technology corroborates his theory. An analysis of 126,000 stories shared on Twitter found that false reports spread both faster and farther on social media because they’re more novel than factual reports. While bots are often blamed for spreading falsehoods, they are not the culprit: a bot detection algorithm the researchers used found that bots spread false news and real news at the same rate.
Most often, fake news originates on websites designed to produce misinformation, false information or extremely slanted perspectives on the news. Many are hate sites. Some take extreme liberal or conservative positions. Some just want to spread rumors or undercut the reputation of politicians, celebrities or businesses. Some produce spoofs or satire that can be easily misinterpreted – and spread as true. Occasionally, but rarely, fake news starts with an error in an otherwise reliable traditional news source. Those sources correct mistakes; fake news sources never do.
The Role of Human Biases and Social Media Algorithms
Many believe human biases play a role in spreading fake news. We’re more likely to believe and share news, especially inflammatory news, that supports our established beliefs, even if the reports are false.
Social media algorithms are also a major factor in the spread of fake news. The algorithms show social media users content with high engagement – posts already liked, shared and commented on. When many people share false but emotional posts, the algorithms display those posts to more users, creating a vicious cycle.
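The feedback loop is easy to picture in code. Below is a minimal sketch of an engagement-weighted feed; the post names, share rates and audience sizes are illustrative assumptions, not any network’s actual ranking algorithm – just a toy model of the cycle described above.

```python
# Toy simulation of an engagement-ranked feed (illustrative only).
# The post with the most prior engagement is shown to the largest audience,
# so a false-but-emotional post that attracts early shares keeps winning
# the top slot and pulling further ahead.

import random

posts = {
    "accurate_report": {"engagement": 5, "share_rate": 0.05},
    "false_but_emotional": {"engagement": 5, "share_rate": 0.15},  # outrage/novelty boosts sharing
}

random.seed(42)

for step in range(50):
    # Rank posts by accumulated engagement (the "high engagement first" rule).
    ranked = sorted(posts.items(), key=lambda kv: kv[1]["engagement"], reverse=True)

    for rank, (name, post) in enumerate(ranked):
        impressions = 100 if rank == 0 else 20  # the top slot reaches far more users
        new_shares = sum(random.random() < post["share_rate"] for _ in range(impressions))
        post["engagement"] += new_shares

for name, post in posts.items():
    print(name, post["engagement"])
```

Run the loop and the false post ends up with several times the engagement of the accurate one, even though both started equal – the “vicious cycle” in miniature.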
“At its worst, this cycle can turn social media into a kind of confirmation bias machine, one perfectly tailored for the spread of misinformation,” writes Brookings Institution researcher Chris Meserole in the Lawfare blog.
Two tweets from CBC journalist Natasha Fatah after a man drove a van into a crowd in Toronto, killing 10 and injuring over a dozen people, illustrate the point. One tweet cited an eyewitness who identified the attacker as “angry” and “Middle Eastern.” The other correctly identified him as “white.”
The tweet with incorrect information spread exponentially. A small subset of Fatah’s followers engaged with it, boosting it in Twitter’s algorithm and creating a cycle of ever greater exposure and engagement. The correct tweet received minimal initial engagement and never gained much traction.
Clear Solutions
Meserole suggests solutions for Twitter, some of them relatively simple.
The network could promote police or government accounts during an attack to help disseminate accurate information as quickly as possible. It could display a warning about the unreliability of initial eyewitness accounts at the top of its search and trending feeds.
Twitter could also update its “While You Were Away” and search features to display accurate information once police have identified the attacker.
It could also hire an editorial team to track and remove blatant misinformation from trending searches, or allow users to flag misinformation they find. Unfortunately, even 10 days after the Toronto attack, many Twitter users continued to believe and share the incorrect information.
The Media Monitoring Answer
Media monitoring can help protect businesses and well-known individuals, who are often the subject of fake news attacks.
The Glean.info Fake News Monitoring Service has identified about two thousand online sites that in some way propagate fake news. The media monitoring service captures all the content of those sites and immediately notifies organizations, celebrities or their PR personnel when fake news sites mention their names, products or other selected keywords. That lets PR teams spot misinformation and respond swiftly.
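To make the capture-match-notify workflow concrete, here is a hypothetical sketch of keyword-based alerting. It is not Glean.info’s actual implementation; the site name, keywords and notification function are invented for illustration, and it assumes the page text from monitored sites has already been captured.

```python
# Hypothetical keyword-alerting sketch (not Glean.info's actual system).
# Captured page text from known fake-news sites is scanned for client
# keywords; each hit triggers a notification so PR staff can respond quickly.

import re
from dataclasses import dataclass


@dataclass
class Mention:
    site: str
    keyword: str
    snippet: str


WATCHED_KEYWORDS = ["Acme Corp", "Jane Doe"]  # example client names/products


def find_mentions(site: str, page_text: str, keywords=WATCHED_KEYWORDS):
    """Return a Mention for each keyword found in the captured page text."""
    mentions = []
    for kw in keywords:
        for match in re.finditer(re.escape(kw), page_text, flags=re.IGNORECASE):
            start = max(match.start() - 40, 0)
            snippet = page_text[start:match.end() + 40]
            mentions.append(Mention(site=site, keyword=kw, snippet=snippet))
    return mentions


def notify(mention: Mention):
    # Placeholder for an email/webhook alert to the client's PR team.
    print(f"ALERT: '{mention.keyword}' mentioned on {mention.site}: ...{mention.snippet}...")


# Example run against captured content from a fictional fake-news site.
captured = "Sources claim Acme Corp secretly funded the protest, insiders say."
for m in find_mentions("fake-example-news.com", captured):
    notify(m)
```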
Bottom Line: While social media networks can take steps to limit the spread of fake news and other misinformation, some blame human fallibility for promoting fake news. Fake news publishers take advantage of our innate attraction to new and novel stories, and changing both human habits and social media features will be extremely difficult. A media monitoring service that tracks sites known to produce fake news can therefore identify fake news about politicians, celebrities, brands, companies and other organizations. If the monitoring service reports a fake news story about you or your organization, taking aggressive action to correct and counteract it can go a long way toward fixing the problem and protecting your reputation.
William J. Comcowich founded and served as CEO of CyberAlert LLC, the predecessor of Glean.info. He is currently serving as Interim CEO and member of the Board of Directors. Glean.info provides customized media monitoring, media measurement and analytics solutions across all types of traditional and social media.