Corporate PR Responses to Deepfakes

Deepfake videos, altered video recordings, bring new threats to corporate reputations.

Advances in video editing enable fraudsters, pranksters and criminals to create videos that portray people doing things they never did or saying words they never said. The videos can dub fabricated words onto a speaker or replace one person’s head with another’s. Free or low-cost video-editing tools for producing the fakes are now widely available.

President Trump and his political supporters recently shared a doctored video of Speaker of the House Nancy Pelosi. The clip, which spread virally on Facebook, Twitter and YouTube, was slowed to make it appear as if Pelosi was drunk and slurring her words. While it’s not certain who first posted the manipulated clip, the Daily Beast attributed the deed to a Trump fan and occasional sports blogger from the Bronx.

Some experts call such altered videos “cheap fakes” or “shallow fakes” because they don’t completely replace the speaker’s words. Shortly afterward, a true deepfake video showed Facebook CEO Mark Zuckerberg hypothesizing that one man could control the future by obtaining stolen personal data. The video was part of a commissioned art installation in the UK meant to show how technology can be used to manipulate data.

Brands Also Face Deepfake Risks

Politicians and other observers worry that foreign actors could use deepfake videos to interfere with elections. Most troubling, a fake video spreading virally just before an election could sway voters before it’s exposed.

The doctored videos also pose a great reputational risk to brands. “If they’re so inclined, these mischief-makers could create a massive public relations nightmare for brands,” says David Pring-Mill, a filmmaker and writer who has written about the topic. “People are currently talking about deepfakes within a political context but the business landscape isn’t immune.”

Unscrupulous businesses may create and spread deepfake videos to attack competitors. Mainstream corporations won’t risk their reputation, but small overseas start-ups or unethical entrepreneurs might, Pring-Mill says.

Altering videos or images can prompt a backlash. To create the impression of gender diversity, a communications firm photoshopped two women into a photo of male tech entrepreneurs. When online sleuths noticed the alteration, the image highlighted the lack of diversity at tech firms.

Extremists, criminals or pranksters could create a fake video of a CEO making sexist or racist comments or offering a bribe. They could post fake videos on fake news sites and disseminate them on social media, just as they’ve been spreading fake political news.

Corporate leaders and communications personnel might have trouble convincing the public that a video isn’t real. As deepfakes become more common, businesses may also have difficulty convincing people that real videos are indeed real.

How to Combat Deepfake Videos

Just as PR and communications pros have crisis plans for handling fake news about their organizations, corporations will need to include deepfakes in their PR crisis management and reputation management plans.

Monitor all media. A major political campaign has an army of supporters who can report fake videos or fake news reports. Most organizations lack that kind of staff. Businesses, celebrities, politicians and non-profit organizations can identify misinformation or propaganda campaigns by employing a media monitoring service that alerts them when fake news sites mention their brands, products or other selected keywords. Swift decision-making is critical, so preparation is essential. There’s no time to assemble a crisis team or arrange meetings with many participants.
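The keyword-alert approach described above can be sketched in a few lines. This is a minimal illustration, not a real monitoring service: the brand keywords, domain names and feed format are all hypothetical, and a commercial service would add crawling, deduplication and sentiment scoring.

```python
# Minimal sketch of a brand-keyword alert check. Articles are assumed to
# arrive as (source, text) pairs from a hypothetical monitoring feed.

WATCHLIST = {"acme corp", "acme ceo"}  # brand keywords (hypothetical)
SUSPECT_SOURCES = {"totally-real-news.example"}  # known fake-news domains (hypothetical)

def flag_mentions(articles):
    """Return alerts for articles mentioning a watched keyword,
    marking those from known fake-news sources as high priority."""
    alerts = []
    for source, text in articles:
        lowered = text.lower()
        if any(keyword in lowered for keyword in WATCHLIST):
            priority = "high" if source in SUSPECT_SOURCES else "normal"
            alerts.append({"source": source, "priority": priority})
    return alerts

feed = [
    ("totally-real-news.example", "Leaked video shows Acme Corp CEO taking a bribe"),
    ("wire-service.example", "Acme Corp posts quarterly earnings"),
    ("unrelated.example", "Local bake sale raises funds"),
]
alerts = flag_mentions(feed)
```

The point of the priority flag is the triage discussed later in this article: a brand mention on a known fake-news site warrants faster human review than routine coverage.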

Consider human analysts. Automated monitoring and measurement software may not be able to detect a fake news story. That may require human reviewers and analysts who are knowledgeable about the organization and its products. The content analysis to identify fake news stories could be outsourced to the media monitoring service or done by the organization’s own staff.

Invest in technology. Tools can detect video forgeries or raise red flags. Experts predict a technological arms race between creators of fake videos and those trying to spot and debunk the forgeries before they spread widely on social media. Social media analytics will provide a critical weapon.

Get to know the social media networks. Develop a line of communication with social media platforms as early as possible. That makes it easier to report accounts that violate the network’s community standards. But don’t count on social networks to remove fake videos, even after receiving complaints. Only YouTube took down the fake Pelosi video; Facebook and Twitter let it stand. Facebook also let stand the fake Zuckerberg video since it couldn’t have a different policy for its CEO than it did for the Speaker of the House without incurring public wrath.

Pressure social networks. Companies can form coalitions to pressure social platforms to invest in technologies to identify deepfakes. If needed, they can withhold advertising to spur action. “Pushing these platforms to take the future of misinformation seriously would be good not only for corporations but also for society at large,” write experts in the Harvard Business Review.

Consider legal options. Companies and celebrities have legal recourse, explain Ryan J. Black and Pablo Tseng, lawyers and technology experts at McMillan LLP. They include copyright infringement, defamation, violation of privacy, appropriation of personality, the criminal code, human rights complaints, intentional infliction of mental suffering, and harassment. One problem: Identifying who created and posted the deepfake may be difficult, given the anonymity many networks provide.

Video for the record. Record video of executives and other company spokespeople at public speaking engagements. Communications personnel can provide that raw footage to the media and the public to expose any deepfake videos from the event.

Publish more video content. If consumers, journalists and others regularly see videos of the corporation’s leaders and other representatives, they’re more likely to grow suspicious of fake videos, which may seem out of character.

Turn to supporters. A political campaign or corporation can turn to its supporters to rebut fake videos. Corporations and other organizations can develop relationships with potential advocates, including employees, customers and social media followers, who can report and refute doctored videos and other forms of misinformation.

Devise a triage plan. The course of action after finding a fake news story or deepfake video depends on the likelihood that the fake content will reach the public and key audiences, and on its potential impact on your organization. Weigh the legitimacy of the information source and whether coordinated behavior is spreading the deepfake video or fake news. If the chances that misinformation will spread and its possible impact are both low, doing nothing is often the best option.

“It may seem counterintuitive, but sometimes it is best to ignore disinformation,” advises Lisa Kaplan, former digital director for the Angus King campaign for U.S. Senate, in an article for Brookings. “If a false news story is written but no one reads it, the campaign drawing attention to the story could increase the story’s reach.”

Medium danger calls for refuting misinformation with the help of supporters and spreading positive messages about the organization.

The most dangerous threats call for rebutting the falsehood in addition to public statements and demands for retraction.
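The three-tier triage described above can be expressed as a simple decision rule. The scoring and thresholds below are illustrative assumptions, not an industry standard; in practice the two inputs would come from human judgment or social media analytics.

```python
# Sketch of the triage logic: the response depends on how likely the fake
# is to spread and how much damage it could do. Thresholds are assumptions.

def triage(spread_likelihood, impact):
    """Map two 0-1 risk scores to one of three responses."""
    risk = spread_likelihood * impact
    if risk < 0.2:
        return "monitor"         # low danger: ignoring may be best
    if risk < 0.5:
        return "refute"          # medium: enlist supporters, spread positives
    return "rebut publicly"      # high: statements and retraction demands

triage(0.1, 0.3)  # obscure fake with little reach: monitor only
```

The multiplication reflects the article’s point that both factors must be high before an aggressive public response is worth the attention it draws to the fake.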

Bottom Line: Besides posing a serious threat to national security and American democracy, deepfake videos create major corporate reputational risks. Experts recommend that brands start taking precautions immediately and prepare plans to respond to deepfake video attacks.