
Fake News: Too Complicated for Simple Solutions

Combatting fake news is complicated. Even defining fake news is challenging. Misinformation and disinformation come in a variety of flavors from a range of actors with different motivations.

Researchers are striving to learn why people share fake news articles pretending to be factual, why some people are more prone to believe and share false news and why some articles are more likely to spread on social media. Claire Wardle, research director at First Draft News, a nonprofit supporting truth and trust in news, says defining and identifying fake news depends on:

The type of content. The types of false or misleading content range from inaccurate headlines and manipulated imagery to stories that are 100 percent false.

Motivations for fake news creators. Motivations include parody, provocation, passion, partisanship, profit, political influence or power, and propaganda. Poor journalism can mislead through inaccurate headlines and false context.

How the content is disseminated. Bot networks sometimes spread false reports in sophisticated disinformation campaigns. Loosely connected groups promote misinformation to influence public opinion. Journalists under time pressure may amplify misinformation. Many people share fake news reports that they mistake for fact.

Easier to Spread Fake News Now

Spreading fake news is easier than ever, Wardle says. Fake news purveyors know how to trick us by sending coordinated, consistent messages. Viewers are more likely to believe something if they’ve seen it several times a day. Overwhelmed by information, they become more vulnerable.

“We all play a crucial part in this ecosystem. Every time we passively accept information without double-checking, or share a post, image or video before we’ve verified it, we’re adding to the noise and confusion,” Wardle argues. “The ecosystem is now so polluted, we have to take responsibility for independently checking what we see online.”

“Check before sharing” should become a new mantra. Be especially wary of posts that rouse anger or that make you feel smug by confirming your current viewpoints. Consider pausing to let emotions subside – and to do some fact checking.

Also be wary of satire. I was almost tricked into believing and sharing a satirical piece about the White House that appeared in Extra Newsfeed. When you look at all the headlines on the home page, it’s easy to recognize the satire – and the site acknowledges that “half of these stories are satire.” But when you see one headline in isolation – as in a Facebook feed – it can be difficult to recognize satire, especially if it’s subtle and you’re inclined to believe the worst about the person who is the subject of the story. Read such stories with some skepticism.

Media Monitor Provides Protection

Glean.info has identified nearly 2,000 online sites that in some way propagate fake news. Some are satirical such as The Onion. Some are extremist or deal in hate. Others peddle misinformation for propaganda. Even fact-checking sites can inadvertently propagate misinformation in their efforts to correct it.

Businesses, celebrities, politicians and non-profit organizations can identify misinformation or propaganda campaigns by employing a media monitoring service that alerts them when fake news sites mention their brands, products or other selected keywords.

Because misinformation can damage reputations, PR has a responsibility to take a leading role in combating fake news. A media monitoring service with near real-time alerts allows PR teams to respond quickly to false information about their organizations.
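Conceptually, that kind of alerting boils down to watching known fake-news sources for mentions of brand keywords. The sketch below is a hypothetical Python illustration of the idea, not Glean.info’s implementation or its source list; the feed URL and keywords are placeholder assumptions.

```python
# Minimal sketch: scan RSS feeds from sites flagged as fake-news sources
# for brand keywords and print an alert on any match.
# The feed URL and keywords are hypothetical placeholders.
import urllib.request
import xml.etree.ElementTree as ET

WATCHED_FEEDS = [
    "https://example-fake-news-site.com/rss",  # hypothetical flagged site
]
BRAND_KEYWORDS = {"acme corp", "acme ceo"}     # hypothetical brand terms


def fetch_items(feed_url):
    """Download an RSS feed and yield (title, link, description) tuples."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        yield (
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("description", default=""),
        )


def scan(feed_urls, keywords):
    """Print an alert for every feed item that mentions a watched keyword."""
    for url in feed_urls:
        try:
            for title, link, description in fetch_items(url):
                text = f"{title} {description}".lower()
                hits = [kw for kw in keywords if kw in text]
                if hits:
                    print(f"ALERT: {hits} mentioned in '{title}' ({link})")
        except Exception as exc:  # keep checking the remaining feeds
            print(f"Could not check {url}: {exc}")


if __name__ == "__main__":
    scan(WATCHED_FEEDS, BRAND_KEYWORDS)
```

A production service would add scheduling, deduplication of previously seen items and delivery of alerts by email or webhook, but the core loop – fetch, match keywords, notify – is the same.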

Facebook CEO Mark Zuckerberg defined three kinds of fake news promoters: spammers, state actors, and media outlets that report false information. Facebook can counter spammers, or unethical advertisers, by ejecting them from its network, Zuckerberg told Vox co-founder Ezra Klein. Combatting state actors like Russian bot farms will likely remain an ongoing battle, although Facebook has removed thousands of fake accounts. Fake news from legitimate news media outlets that believe they’re posting accurate content presents the most challenging issue, he says.

Facebook’s New Plan to Limit Fake News

To meet that challenge, Facebook has introduced a new plan. It hopes to provide context about articles by showing a Wikipedia link for the publisher, related articles on the same topic, information about how many times the article has been shared, and where it has been shared.

Facebook will also introduce two new features:

More From This Publisher, which will give people a quick snapshot of the other recent stories posted by the publisher.

Shared By Friends, which will show people any of their friends who have shared the article.

It’s also testing features that provide biographical information about authors and links to their other stories.

Bottom Line: Many concerned academics and business leaders are examining how to limit the distribution of fake news reports. The problem has turned out to be more complicated than some first thought. The wide range of misinformation types may call for different responses, and solutions don’t seem imminent. For now, everyone must be cautious when sharing content online.