Deepfake fraud losses in North America alone exceeded $200 million in the first quarter of 2025, according to Keepnet Labs. Meanwhile, the Deloitte Center for Financial Services projects that generative AI‑enabled fraud losses in the U.S. will grow from $12.3 billion in 2023 to $40 billion by 2027. Among the AI‑enabled threats driving those losses, voice cloning has emerged as one of the most dangerous.

Pindrop’s 2025 Voice Intelligence and Security Report found that voice deepfake attacks rose 680 percent year‑over‑year, with some tools now able to clone an individual’s voice using as little as three seconds of audio.

Deepfake attacks are no longer theoretical

Judson “JB” Stringer, Security Compliance Senior Engineer at Managed Services Group, said that across MSP environments, deepfake‑enabled executive impersonation is no longer hypothetical—it is actively being used to facilitate fraud.

“We’re seeing highly targeted attacks where malicious actors use AI‑cloned voices and synthetic video to impersonate CEOs or CFOs and pressure finance teams into authorizing urgent wire transfers,” Stringer said.

He added that these incidents often coincide with executive travel or other plausible out‑of‑office scenarios. Attackers frequently reinforce the deception with spoofed follow‑up emails, creating a convincing, multi‑channel narrative that exploits trust rather than technical vulnerabilities.

Why traditional security controls fall short

According to Stringer, preventing these attacks requires controls that do not rely on human perception. Email authentication standards such as DMARC, DKIM, and SPF remain critical for blocking the spoofed messages that often accompany voice or video fraud. AI‑driven call analytics can also help flag synthetic voice characteristics that the human ear cannot distinguish from genuine speech.
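
For illustration, these are roughly the DNS TXT records involved, with example.com and placeholder values standing in for a real domain and keys:

    example.com.                       TXT "v=spf1 include:_spf.example.com -all"
    selector1._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=<public key>"
    _dmarc.example.com.                TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

With a p=reject DMARC policy, receiving mail servers discard messages that fail SPF or DKIM alignment, so a spoofed “CEO” follow‑up email never reaches the finance team’s inbox.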

Just as important, wire transfer workflows must enforce multi‑factor authentication and mandatory callback verification, ensuring no transaction is ever approved based solely on a phone or video request, regardless of who appears to be making it.

Reducing risk in high‑value financial workflows

From a practical standpoint, Stringer emphasized that MSPs should remove ambiguity and trust‑based decision‑making from high‑risk financial processes. That begins with enforcing dual authorization for wire transfers above defined thresholds and requiring out‑of‑band verification through a known, pre‑established channel.
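
As a rough sketch of that policy logic (the threshold, names, and data shapes below are illustrative assumptions, not a description of any particular MSP’s tooling), the gate reduces to a few lines of Python:

    # Illustrative dual-authorization gate for wire transfers.
    DUAL_AUTH_THRESHOLD = 25_000  # assumed dollar threshold; set per policy

    def approve_wire_transfer(amount, approver_ids, callback_verified):
        """approver_ids: distinct people who signed off on the request.
        callback_verified: True only after the requester was re-contacted
        on a known, pre-established channel, never the inbound call."""
        if not callback_verified:
            return False  # nothing moves on the strength of one call alone
        if amount >= DUAL_AUTH_THRESHOLD and len(set(approver_ids)) < 2:
            return False  # high-value transfers need two independent approvers
        return True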

Additional safeguards—such as verbal codewords shared only between executives and finance teams, mandatory callback procedures, and detailed endpoint and communications logging—help reinforce accountability and improve forensic visibility.

Preparing employees for the reality of deepfakes

Equally critical, Stringer said, is preparing employees for deepfake‑driven threats. MSPs should provide deepfake‑specific security awareness training and run tabletop exercises that simulate executive impersonation scenarios. These exercises help employees build muscle memory around verification steps, especially under pressure.

He also advised organizations to audit and limit publicly available executive audio and video where possible, reducing the raw material attackers rely on to train voice‑cloning models.

Why deepfakes hit MSPs where it hurts most

Meanwhile, Stanislav Kazanov, Head of GRC, Cybersecurity & Sustainability and Head of Data at Innowise, said deepfakes pose an outsized risk to MSPs precisely because they circumvent traditional security measures and exploit the two weakest points in the MSP model: the help desk and the client’s finance department.

Kazanov cautioned that the media’s focus on large-scale deepfake video meetings (a 2024 case in Hong Kong received widespread media attention) can give a misleading picture of how these attacks actually play out day to day. The more common scenarios, he said, are lower-tech and harder to catch.

The real-world face of deepfake fraud: Low-tech, high pressure

In one typical attack pattern, an attacker grabs three seconds of an executive’s voice from a publicly available source, such as a podcast or corporate video, and feeds it to an AI voice generator. The attacker then calls the MSP’s help desk with the cloned voice, impersonating a panicked executive who has lost their phone and urgently needs an emergency MFA reset.

In another, an attacker bypasses live interaction entirely by leaving a frantic voice message or sending an urgent WhatsApp audio clip to a junior accounts payable employee. The message creates pressure without giving the employee time to think — for example, “I’m about to get on an airplane, I need you to wire this invoice to avoid losing the contract.”

From detection to verification: Rethinking trust in an AI world

Kazanov also warned against leaning on deepfake detection tools, calling reliance on them a trap.

“Risk-wise, relying on defensive artificial intelligence to detect offensive artificial intelligence is a losing proposition because generative models are evolving at a much faster rate than heuristics can detect them,” Kazanov said. “If an MSP relies exclusively on some sort of software filter to detect and prevent a deepfake, their likelihood of being breached will only worsen over time.”

The answer, he said, is to verify the person, not the media. Kazanov recommended that MSPs move away from SMS codes and push notifications for MFA entirely, replacing them with FIDO2 hardware keys such as YubiKeys. If a deepfake tricks an employee into trying to grant account access but no physical hardware key is present, the attack fails.
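
As a deliberately simplified model of why this works: real FIDO2/WebAuthn relies on public‑key cryptography and attested hardware, but the challenge‑response shape can be sketched in a few lines, with HMAC standing in for the signature a real key would produce:

    import hashlib
    import hmac
    import os

    # Simplified model of a FIDO2 challenge-response. HMAC stands in for
    # the asymmetric signature a real hardware key would produce; the
    # secret never leaves the (simulated) device.

    def issue_challenge():
        return os.urandom(32)  # fresh single-use nonce from the server

    def key_sign(device_secret, challenge):
        # performed inside the hardware key itself
        return hmac.new(device_secret, challenge, hashlib.sha256).digest()

    def server_verify(registered_secret, challenge, response):
        expected = hmac.new(registered_secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

However convincing the cloned voice on the phone, it cannot answer a fresh challenge without the physical key, so the session is never granted.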

He also pointed to continuous authentication as a critical layer, arguing that modern network architectures should continuously ingest behavioral telemetry to catch anomalies in real time. If an employee is calling from London to authorize a wire transfer while their laptop’s IP address and mobile device’s GPS both show Tokyo, a conditional access policy should lock the account immediately.
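
A minimal sketch of that check, assuming the telemetry already arrives as latitude/longitude pairs and using an invented distance tolerance:

    from math import asin, cos, radians, sin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # great-circle distance between two points on Earth, in kilometers
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 6371 * 2 * asin(sqrt(a))

    MAX_PLAUSIBLE_KM = 100  # assumed tolerance between location signals

    def should_lock_account(claimed_location, laptop_ip_geo, phone_gps):
        # Irreconcilable telemetry: lock first, investigate afterward.
        return any(
            haversine_km(*claimed_location, *device) > MAX_PLAUSIBLE_KM
            for device in (laptop_ip_geo, phone_gps)
        )

With London at roughly (51.5, -0.13) and Tokyo at (35.7, 139.7), the gap is about 9,600 km, far beyond any plausible tolerance, and the account locks.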

The zero-cost defense

Kazanov’s most emphatic recommendation is also his simplest and cheapest: what he called a strict out-of-band verification mandate.

“While you will never be able to exceed AI’s technology, you can out-process it,” Kazanov said, noting that the most effective defense against deepfake financial fraud costs nothing to implement.

His proposed procedure is straightforward: any request from a CFO or executive to move funds or change vendor routing numbers must require the financial controller to hang up, pick up a different device, and call the executive back on a known internal number. If the executive doesn’t answer, the transaction doesn’t happen, full stop.
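
Expressed as code, the rule is almost trivially small; the directory lookup and the call_back callable are assumed interfaces for this sketch, not a real system:

    def handle_funds_request(executive_id, directory, call_back):
        """directory: internal numbers on file before any request arrives.
        call_back(number): True only if the executive answers on a
        different device and verbally confirms the request."""
        number = directory.get(executive_id)
        if number is None or not call_back(number):
            return "DENY"  # unreachable or unconfirmed: full stop
        return "PROCEED"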

Underlying all of these controls, Kazanov said, is a fundamental shift in how organizations need to think about trust itself.

“In an AI world, voice and video are no longer valid authenticators of identity — they are simply data streams,” Kazanov said. “All trust must shift away from human biological senses and toward cryptographic or procedural verification.”




Posted by Kevin Williams

Kevin Williams is a journalist based in Ohio. Williams has written for a variety of publications, including the Washington Post, New York Times, USA Today, Wall Street Journal, National Geographic, and others. He first wrote about the online world in its nascent stages for the now-defunct “Online Access” magazine in the mid-90s.
