Deepfake Fraud: The Threat To Corporate Reputation and How To Fight It 

In one video, a State Department spokesman appears to suggest that the Russian city of Belgorod is a legitimate target for Ukrainian strikes using American weapons. 

In another, the Hollywood actor Tom Cruise narrates a documentary criticizing the organization behind this summer’s Olympic Games in Paris.

A visual from the fake documentary Olympics Has Fallen. Illustration: Storm-1679/Microsoft Threat Analysis Center.

The problem with both clips? They’re deepfakes — bogus videos generated by nefarious groups or individuals using artificial intelligence. 

Concern about deepfakes usually focuses on their threat to democracy, amid fears that manipulated audio, video and images could derail elections in the UK, the US and elsewhere. 

Yet the deepfake threat to corporations is growing fast, too — as British engineering firm Arup recently found out the hard way. 

Earlier this year, it fell victim to a deepfake scam after an employee in Hong Kong was duped into sending fraudsters $25m. 

The scam began when the employee received a message regarding a “confidential transaction” from what appeared to be the firm’s UK-based chief financial officer. 

The message linked to a video call in which the deepfake CFO asked the employee to make 15 transfers to five Hong Kong bank accounts. 

Though the employee was suspicious at first, his doubts were allayed because the call also featured deepfakes of several other employees — people who “looked and sounded just like colleagues he recognized”, CNN reported.

And Arup is by no means alone. 

Just last month, the chief executive of the world’s biggest advertising group, WPP, was targeted — unsuccessfully — by a deepfake scam using an AI voice clone.

Meanwhile, Chinese state media reported a similar case in which criminals duped an employee into sending them the equivalent of $262,000 after a video call with a deepfake of her boss.  

All three cases are part of a growing trend enabled by generative AI technology. 

According to CNBC, “various generative AI services can be used to generate human-like text, image and video content, and thus can act as powerful tools for illicit actors trying to digitally manipulate and recreate certain individuals.”  

The scams range from invoice fraud and phishing to deepfake voices and images.

And the barrier to entry for cybercriminals is getting lower. As one expert told CNBC: “They no longer need to have special technological skill sets.”

The Threat To Corporate Reputation

The two successful deepfake scams described above were designed to defraud companies and line fraudsters’ pockets.

In other words, while they may have diminished confidence in the companies’ security measures — if not their savviness — the deepfakes weren’t designed to harm reputations.  

But they could just as easily be deployed to do so. The following hypotheticals show how — and what’s at stake.

Impersonation of Executives:

Cybercriminals could create deepfakes that convincingly mimic the voices and appearances of senior executives. These impersonations could lead to unauthorized transactions — as was the case with Arup — as well as data breaches, leaks of confidential information, disrupted decision-making and fake announcements or statements.

Manipulation of Communications:

Deepfakes could be used to alter or fabricate corporate communications, including emails, videos and social media posts. These manipulated messages could spread false information, sow confusion and erode trust among stakeholders such as employees, investors, customers, regulators and the media. 

Stakeholder Trust:

Trust is the cornerstone of any relationship. When stakeholders perceive that a company cannot protect itself against security breaches and fraud, their confidence in the company’s leadership and operations diminishes. 

Market Value:

Reputation-related issues can erode market value. Investors are wary of companies that seem vulnerable to cyber threats, since such incidents signal future risk.

Customer Loyalty:

Consumers are less likely to stay loyal to a brand they perceive as insecure or dishonest. Deepfake incidents can drive customers away, especially if the fraud involves manipulated communications that mislead or harm them.

How Can Companies Combat Deepfake Fraud?

Given the potential damage, companies should develop strategies to protect themselves against deepfakes. 

Measures that could help include investing in deepfake detection technology to identify fraudulent content before it does any damage; training employees to recognize and respond to deepfakes; and establishing clear communication protocols to verify the identity of individuals before executing sensitive requests such as financial transactions. 

But the reality is that some deepfakes won’t get spotted — especially as the technology behind them grows more sophisticated. What then?

If a deepfake-related crisis does break out, corporate communicators should stick to the following principles: 

Fail to prepare — and prepare to fail

From briefing the C-suite to handling the media, corporate communications teams should already know who will engage stakeholders and how (and the crisis comms agency ought to be on speed dial).

Authenticity is key

When dealing with the discovery of the deepfake and its consequences, companies should be truthful and transparent and live up to their values.

Find out what stakeholders think

During a crisis, companies need to know how their response is resonating with stakeholders. The best way to do that is with Caliber’s Real-Time Tracker. 

Being able to see at any given moment what stakeholders think — and how they’re likely to behave — means companies can make real-time decisions about what is or isn’t working, allowing them to pump the brakes, hit the gas or change course entirely if necessary. 

But here’s the thing. Companies that already enjoy high levels of stakeholder trust handle crises better: that trust acts as a buffer, mitigating reputational damage and shortening recovery time. To build such a robust reputation and increase trust, smart companies continually track the perceptions of relevant stakeholders.

In other words, using stakeholder-tracking technology shouldn’t be an add-on when the proverbial hits the fan — it should already be part of any communications team’s toolkit. 

Conclusion

Just as nobody expects the Spanish Inquisition in Monty Python’s sketch, nobody expects to be duped by a deepfake.

But what happened at Arup is a stark reminder that deepfake fraud is a significant and growing threat to corporate reputation — and that falling for one can be costly. 

Still, even as deepfake technology becomes more accessible and sophisticated, companies can take proactive steps to protect themselves.

This means establishing clear communication protocols, investing in deepfake detection technologies, and conducting regular staff training.

But it also means leveraging technology to track stakeholder perceptions — not only to build a robust reputation but to see in real time if the crisis response is working. 

The question, then, is simple — how prepared is your company?