Business email compromise (BEC) remains one of the most financially damaging cybercrimes targeting U.S. companies, with losses reaching $2.77 billion across more than 21,000 incidents in 2024, according to the FBI’s Internet Crime Complaint Center. Even more concerning is how quickly the threat is evolving: by mid-2024, an estimated 40 percent of BEC phishing emails were AI‑generated, making them far harder to spot.

But despite AI’s growing role in crafting highly convincing messages, the root vulnerability remains unchanged: human behavior, says Paul Perry, practice leader of the Risk Advisory & Assurance Services Group at accounting firm Warren Averett.

“The majority of successful BEC campaigns occur because the human in the loop is not looking for the common red flags associated with phishing emails,” Perry says. “Since the success hinges on human behavior, it’s human behavior that needs the most training—why people do what they do and what red flags to look for.”

Human behavior still drives the risk

The challenge, Perry explains, is that employees often rely on System 1 thinking—fast, instinctive, emotion‑driven—rather than the more deliberate, analytical System 2 thinking.

“There needs to be a healthy dose of both,” he says. “Does this email make sense? Does it seem off compared to our business norms or society’s norms?”

Perry highlights several red flags companies should reinforce through training: multiple or conflicting requests in a single message, manufactured urgency, unusual asks for the sender or recipient, and—thanks to AI—emails that lack the imperfections humans typically make.

“We used to tell people to look for odd language or misspellings. Now, imperfections can actually signal authenticity,” he says. “Ultimately, BEC is a people problem for both business and society, not a technology one. And the only reliable control for human behavior is education, education, education.”

This heavy reliance on human factors puts MSPs in a difficult position as they try to keep clients protected.

Why traditional email training no longer works

AJ Thompson, Chief Commercial Officer at MSP Northdoor plc, agrees that the old playbook no longer works.

“Attackers are using AI to study writing style, tone, job roles, and internal processes. The result is emails that look routine, arrive at the right moment, and don’t trigger obvious red flags,” Thompson says. Often, there’s no malicious link or attachment—just a believable request that fits smoothly into normal business flow.

Training employees to look for errors, strange sender addresses, or urgent language isn’t enough anymore.

“Modern BEC emails often have none of those signs,” Thompson says. “Staff may do everything ‘right’ and still be fooled because the email mirrors how colleagues normally communicate.”

MSPs, he argues, must go beyond awareness training.

“Email security needs context. That means checking behavior, relationships, and intent—not just content,” Thompson says. Controls such as enforced approval for payment changes, strict identity checks for high‑risk actions, and better visibility into account activity are now essential. “BEC is as much a process failure as a technical one.”

A modern BEC defense strategy for MSPs

Emily Holyoke, CEO of Not a Standard, a human-behavior‑focused security and intelligence consultancy in Australia, outlines a practical framework MSPs should use to defend against the new wave of AI‑enhanced BEC attacks.

“BEC 2.0 is less about ‘spot the phish’ and more about ‘protect the payment process,’” Holyoke says. “AI just makes impersonation cheaper and more convincing.” If someone can approve a payment change in the same email thread that requested it, she adds, “that’s a design flaw, not a training gap.”

She offers this checklist for MSPs:

A) Identity + access (stop account takeovers)

  • Enforce MFA everywhere (prefer phishing‑resistant methods).
  • Disable legacy/basic authentication; minimize persistent admin privileges.
  • Enable conditional access and risky login alerts (new device, unusual location, impossible travel).
  • Protect admin accounts with stronger policies and separate identities.
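The "impossible travel" alert above can be approximated with a simple speed check between consecutive logins. A minimal sketch, assuming login events already carry geolocated coordinates; the `Login` record and the 900 km/h airliner ceiling are illustrative, not any vendor's API:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    when: datetime
    lat: float
    lon: float

def distance_km(a: Login, b: Login) -> float:
    # Haversine great-circle distance between the two login locations.
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: Login, curr: Login, max_kmh: float = 900.0) -> bool:
    # Flag if the implied travel speed between consecutive logins
    # exceeds what a commercial flight could plausibly cover.
    hours = abs((curr.when - prev.when).total_seconds()) / 3600
    if hours == 0:
        return distance_km(prev, curr) > 50  # simultaneous logins far apart
    return distance_km(prev, curr) / hours > max_kmh
```

A login from Ohio followed an hour later by one from London would trip the check; two logins across town within the same half hour would not.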

B) Mailbox protections (stop silent persistence)

  • Monitor/alert for new or changed inbox rules; block external auto‑forwarding where possible.
  • Alert on suspicious OAuth app consents.
  • Strengthen checks for reply‑to spoofing and display‑name impersonation; use external sender banners.
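Auto-forwarding rules are a classic persistence mechanism after a mailbox takeover, and the check itself is simple once rules have been exported (for example via the tenant's mailbox-rule API or an Exchange report). A minimal sketch; the dictionary shape and the `INTERNAL_DOMAINS` set are assumptions for illustration:

```python
INTERNAL_DOMAINS = {"example.com"}  # assumption: the client's own domains

def suspicious_forwarding_rules(rules):
    """Return rules that auto-forward or redirect mail outside the org."""
    flagged = []
    for rule in rules:
        targets = rule.get("forward_to", []) + rule.get("redirect_to", [])
        # Any target whose domain is not an internal domain is external.
        external = [t for t in targets
                    if t.rsplit("@", 1)[-1].lower() not in INTERNAL_DOMAINS]
        if external:
            flagged.append({"rule": rule.get("name", "<unnamed>"),
                            "external_targets": external})
    return flagged
```

An alert pipeline would run this on a schedule and page when a previously clean mailbox suddenly gains an external redirect.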

C) Email authentication + domain/brand protection

  • Implement SPF, DKIM, and DMARC—beyond monitor mode.
  • Monitor lookalike/typosquatted domains and brand impersonation attempts.
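For reference, "beyond monitor mode" means moving DMARC past `p=none` to `p=quarantine` or `p=reject`. Illustrative DNS records for a hypothetical `example.com` tenant on Microsoft 365 (the include host, selector, and report address are examples, not prescriptions):

```
example.com.                      IN TXT   "v=spf1 include:spf.protection.outlook.com -all"
selector1._domainkey.example.com. IN CNAME selector1-example-com._domainkey.example.onmicrosoft.com.
_dmarc.example.com.               IN TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The `rua` aggregate reports are what give an MSP visibility into who is sending on the client's behalf before tightening the policy to `p=reject`.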

D) Finance workflow controls (where MSPs prevent the losses)

  • Require out‑of‑band verification for bank detail changes, new payees, vendor onboarding changes, and urgent wires.
  • Use two‑person approval for payments above a threshold.
  • Use trusted contact details from an internal directory—not from the email thread.
  • Introduce a “cool‑off” period for first‑time payments or changed bank details.
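Holyoke's point that approval-in-the-same-thread is "a design flaw" can be enforced in software rather than left to memory. A minimal sketch of the checks above as a policy gate; the threshold and cool-off values are illustrative, real ones would come from the client's payment policy:

```python
from datetime import datetime, timedelta

APPROVAL_THRESHOLD = 10_000        # illustrative: two approvers above this amount
COOL_OFF = timedelta(hours=24)     # illustrative: wait after a bank-detail change

def payment_blockers(amount, approvers, details_changed_at, now, verified_out_of_band):
    """Return the policy checks that still block this payment."""
    blockers = []
    if not verified_out_of_band:
        blockers.append("verify bank details via a trusted channel, not the email thread")
    if amount > APPROVAL_THRESHOLD and len(set(approvers)) < 2:
        blockers.append("second approver required above threshold")
    if details_changed_at and now - details_changed_at < COOL_OFF:
        blockers.append("cool-off period after bank-detail change not elapsed")
    return blockers
```

A payment only proceeds when the list comes back empty, which makes "urgent wire to updated details" structurally impossible to rush through one inbox.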

E) Detection tuned to BEC behaviors (not just phishing)

  • Flag mailbox rule creation, mass searches, unusual login patterns, and unexpected sent‑items activity.
  • Monitor for keywords suggesting payment changes (e.g., “updated details,” “urgent transfer”).
  • Detect thread hijacking: slight domain changes, altered reply‑to, “I’m in a meeting/can’t talk” pressure tactics.
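Several of these detections are cheap string heuristics an MSP can layer on top of whatever the mail gateway already does. A minimal sketch combining keyword, reply-to, and lookalike-domain checks; the keyword list, trusted domain, and 0.85 similarity cutoff are assumptions for illustration:

```python
from difflib import SequenceMatcher

PAYMENT_KEYWORDS = ("updated details", "urgent transfer", "new bank")
TRUSTED_DOMAIN = "example.com"  # assumption: the client's real domain

def bec_signals(sender_domain, reply_to_domain, body):
    """Collect cheap heuristics that together suggest a BEC attempt."""
    signals = []
    text = body.lower()
    for kw in PAYMENT_KEYWORDS:
        if kw in text:
            signals.append(f"payment-change keyword: {kw!r}")
    if reply_to_domain and reply_to_domain != sender_domain:
        signals.append("reply-to domain differs from sender domain")
    # Near-miss domains (e.g. examp1e.com) score high on similarity
    # without matching exactly.
    sim = SequenceMatcher(None, sender_domain, TRUSTED_DOMAIN).ratio()
    if sender_domain != TRUSTED_DOMAIN and sim > 0.85:
        signals.append(f"lookalike of {TRUSTED_DOMAIN}: {sender_domain}")
    return signals
```

No single signal is conclusive, which is the point: scoring them together catches the "believable request with no malicious link" messages Thompson describes.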

F) CEO impersonation + LinkedIn/OSINT exposure

  • Treat leadership changes as high‑risk periods; remind staff of impersonation patterns.
  • Policy: no payment, gift card, or credential actions from personal email/SMS/WhatsApp—verification required.
  • Provide a one‑page “exec impersonation” playbook with contacts and verification steps.
  • Improve LinkedIn hygiene: limit direct contact details and unnecessary org‑chart visibility.

G) Training MSPs can operationalize

  • Run practical drills with finance/admin teams (invoice change, CEO urgent request, bank‑change scenarios).
  • Give clients one simple rule: “No bank detail change or urgent payment without verification through a trusted channel.”

In the age of AI‑powered deception, the organizations that will stay safest are those that reinforce their people, strengthen their processes, and never assume an email is what it seems.

Photo: one photo / Shutterstock



Posted by Kevin Williams

Kevin Williams is a journalist based in Ohio. Williams has written for a variety of publications including the Washington Post, New York Times, USA Today, Wall Street Journal, National Geographic and others. He first wrote about the online world in its nascent stages for the now defunct “Online Access” Magazine in the mid-90s.
