Deepfakes Attack Corporate Image: How Companies Protect Leaders’ Reputation in the AI Era

Deepfake technology poses a growing threat to corporate security and reputation. This article discusses its impact and how companies are responding.

Deepfakes: The Latest Weapon in Attacks on Business

Imagine this scenario: your CFO receives a video call from someone who looks and sounds exactly like the company's CEO. The caller insists that $25 million must be transferred immediately to an investment account, sounds stressed, and explains why it's urgent. It turns out the whole call was fake: a deepfake video paired with a synthetic voice. Unfortunately, this scenario is no longer fiction.

In 2024, the British engineering firm Arup lost $25 million in a fraud orchestrated using deepfakes, marking the most dramatic example of this new class of corporate security threats. A member of the finance team was convinced they were speaking to several board members, all shown via deepfake video during a videoconference.

The statistics are alarming. According to a report by the World Economic Forum, deepfake-related fraud increased by 1740% in North America between 2022 and 2023. In the first quarter of 2025 alone, financial losses linked to deepfakes exceeded $200 million.

What distinguishes corporate deepfakes from political ones? They are highly personalized, contextually perfect attacks — “surgical strikes” aimed at trust within business networks.

How Deepfakes Became Accessible to Everyone

Just five years ago, creating a convincing deepfake required advanced software and several hours of work. Today it can cost as little as $15 and take about three minutes.

Fraudsters can now produce convincing CEO deepfakes within minutes using publicly available AI tools. Entrepreneur points out that the minimum requirements to create a deepfake are: a few seconds of the victim’s video footage, their voice from public materials (conferences, interviews), and access to free AI tools.

The biggest source of material for scammers is public footage: press conferences, media interviews, and LinkedIn or YouTube content. A clip as short as 10 seconds can be enough to train an AI model.

The KPMG deepfake threat report indicates that companies with higher media visibility (large corporations, industries with broad reach) are preferred targets. Individuals in finance, HR, or decision-making roles are especially vulnerable.

Why Deepfakes Pose a Particular Threat to PR and Reputation

Traditional communication crises unfold over time: a statement goes out, questions come in, responses are prepared. With deepfakes, the situation is different: before PR can say "this is fake," media, employees, and investors have already shared the false video many times over.

The threat to corporate reputation breaks down into several categories:

1. Financial fraud — as in the Arup case — scammers use deepfakes to convince finance staff to transfer large sums.

2. Blackmail and espionage — deepfakes can be used to blackmail managers or steal commercial information.

3. Threat to investor trust — if a deepfake CEO presents fraudulent transactions, it undermines market trust in real leaders, potentially causing stock value to plummet.

4. Internal confusion — deepfakes can erode trust within an organization if employees are unsure whether communication from management is authentic.

The D&O Diary emphasizes that corporate boards must now add deepfakes to their list of potential crises — just like cyber attacks or data breaches.

Crisis Communication Strategies in the Era of Deepfakes

For PR agencies and corporate communication teams, deepfakes introduce a whole new category of crises. Here’s what companies should do:

Proactive employee education
The entire organization must be trained to recognize deepfakes. Warning signs include: strange lighting, unnatural gestures, lack of natural blinking, unnatural mouth movements. The best defense is never to act solely on video — always verify information through established communication channels.

Financial transaction verification protocols
Any transfer above a certain threshold must undergo multi-channel verification — a phone call from a known number, an email from an official address, and confirmation from at least two individuals.
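The verification protocol above can be expressed as a simple policy check. The following is a minimal sketch, not a production payment system; the threshold, channel names, and approver count are hypothetical placeholders, assuming a policy of two independent channels and two distinct approvers for large transfers.

```python
from dataclasses import dataclass, field

# Hypothetical policy values, for illustration only.
VERIFICATION_THRESHOLD = 50_000  # transfers above this need extra checks
REQUIRED_CHANNELS = {"callback_known_number", "official_email"}
REQUIRED_APPROVERS = 2

@dataclass
class TransferRequest:
    amount: float
    verified_channels: set = field(default_factory=set)
    approvers: set = field(default_factory=set)

def record_verification(req: TransferRequest, channel: str, approver: str) -> None:
    """Log that one approver confirmed the request over one independent channel."""
    req.verified_channels.add(channel)
    req.approvers.add(approver)

def may_execute(req: TransferRequest) -> bool:
    """Small transfers pass; large ones need every required channel
    plus confirmation from at least two distinct individuals."""
    if req.amount <= VERIFICATION_THRESHOLD:
        return True
    return (REQUIRED_CHANNELS <= req.verified_channels
            and len(req.approvers) >= REQUIRED_APPROVERS)
```

The point of the design is that no single channel (and in particular no single video call) can authorize a large transfer: a convincing deepfake on one channel still fails the policy until an independent channel and a second person confirm it.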

Prepared crisis communication
PR teams should have ready-made communication templates in case deepfakes involve their leaders. Key elements include rapid confirmation of leaders’ authenticity, explanation of the situation, and instructions for the media on how to verify genuine statements.

Investment in verification technology
Some companies are already investing in deepfake detection systems that can automatically identify manipulations in video materials. The SEC regularly tests such solutions.

Challenges for PR Teams

The deepfake situation creates a paradox for PR. On one hand, corporate communication must be more transparent and accessible — to reach stakeholders. On the other hand, greater visibility means a “security tax” — more publicly available material for scammers.

The paradox also applies to authenticity: if the CEO must verify every statement to confirm it’s really them, communication becomes slower and less spontaneous. If they don’t verify — they risk fraud.

The D&O Diary article notes that uncertainty itself can damage reputation: "Our CEO had to confirm every statement" sounds almost as bad as a real deepfake.

The Future: What Lies Ahead

PR and security experts predict that deepfakes will become even more sophisticated. By 2026, we will likely see deepfakes that are nearly indistinguishable from authentic footage, even for specialists.

At the same time, deepfake detection technologies will evolve. Industry standards, like those promoted by the World Economic Forum, will become mandatory for public companies.

For PR agencies and communication teams, deepfakes are the new reality. Companies that ignore this threat will face greater challenges — not only financially but also reputationally. In an era where deepfakes can be created within minutes, trust and verification become the most valuable communication assets.

A scenario where a scammer impersonating the CEO runs the company for weeks before anyone notices? Unfortunately, that has already happened. The question is no longer “can it happen?” but “when it happens, will you be ready?”
