THREAT INTELLIGENCE BRIEFING — FEBRUARY 2026
$16.6 billion

in reported cybercrime losses — and deepfake fraud is an accelerating contributor

FBI IC3 2024 Annual Report · 859,532 complaints · 33% increase year-over-year

⚠ One deepfake attack every 5 minutes

The Arup Deepfake Heist

Hong Kong, February 2024 — the most instructive deepfake fraud case to date

$25.6 million

stolen via a single deepfake video conference — every participant was AI-generated

Initial Contact
A finance employee in Arup's Hong Kong office received a message purporting to be from the company's UK-based CFO, requesting participation in a "confidential transaction" video call.
The Deepfake Call
The employee joined a video conference in which the CFO and multiple colleagues appeared on camera. All participants — their faces, voices, and mannerisms — were AI-generated deepfakes. The call was convincing enough that the employee did not question the identities.
The Transfer
Following instructions received during the call, the employee executed 15 wire transfers totaling HK$200 million (~US$25.6 million) to five different bank accounts.
Discovery
The fraud was discovered approximately one week later when the employee checked with Arup's head office and confirmed that no such transaction had been authorized.
Recovery Attempt
Singapore's Anti-Scam Centre and Hong Kong police responded within 48 hours, freezing most of the funds before they could be fully laundered.
Investigation
Hong Kong police traced the attack to organized criminal networks in Southeast Asia with access to sophisticated real-time deepfake technology.
Key Lesson

A single procedural control — requiring a callback to a pre-registered number before executing any wire transfer — would have prevented this entire $25.6M loss. The attackers relied entirely on the employee trusting the visual and auditory evidence of the video call.
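A minimal sketch of what such a control can look like in a payments workflow, assuming a Python-based approval step; every name, number, and threshold below is hypothetical and does not describe Arup's actual systems. The essential property is that the transfer path refuses to execute until a person confirms the request over a separately maintained, pre-registered channel.

```python
# Illustrative sketch only: names, numbers, and the threshold are hypothetical.
CALLBACK_REQUIRED_ABOVE_USD = 10_000

# Pre-registered callback numbers are maintained out of band (e.g., in the HR/ERP
# system) and are never taken from the transfer request or the video call itself.
PRE_REGISTERED_NUMBERS = {
    "cfo@example.com": "+44 20 7946 0000",
}


def confirm_by_callback(requester: str, amount_usd: float, beneficiary: str) -> bool:
    """Ask a human to call the pre-registered number and confirm the transfer
    details verbally; returns True only on explicit confirmation."""
    number = PRE_REGISTERED_NUMBERS.get(requester)
    if number is None:
        return False  # no registered channel means no way to verify, so refuse
    print(f"Call {number} and confirm: ${amount_usd:,.2f} to {beneficiary}")
    return input("Confirmed on callback? [y/N] ").strip().lower() == "y"


def execute_wire_transfer(requester: str, amount_usd: float, beneficiary: str) -> None:
    """Release a wire transfer only after the callback gate passes."""
    if amount_usd > CALLBACK_REQUIRED_ABOVE_USD and not confirm_by_callback(
        requester, amount_usd, beneficiary
    ):
        raise PermissionError("Transfer blocked: callback verification failed")
    print(f"Transfer of ${amount_usd:,.2f} to {beneficiary} released")
```

Because the callback number comes from a system the attacker does not control, even a flawless deepfake call gains nothing: the request stalls until the real executive answers the phone.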

Taxonomy of Deepfake Technology

8 categories of synthetic media, from free tools to near-perfect fakes

Technology Accessibility Curve (2017–2026)

From PhD-level expertise to no skills required

2017–18 (Research): PhD-level expertise required
2019–20 (Early adoption): strong technical skills required
2021–22 (Democratized): moderate skills required
2023–24 (Consumer): minimal skills required
2025–26 (Industrial): no skills required

In the original chart, the threat level rises from low toward critical as deepfakes become progressively easier to create.

Six Categories of Deepfake-Enabled Threats

Documented incidents, losses, and case studies across each attack vector

Incident Response & Recommendations

A 7-step framework, industry risk profiles, and stakeholder-specific action plans

7-Step Incident Response Framework

Step 1: DETECT (owner: Security Ops; timeframe: immediate)
Step 2: HALT (owner: Employee + Manager; timeframe: < 5 min)
Step 3: VERIFY (owner: Employee; timeframe: < 15 min)
Step 4: PRESERVE (owner: IT Security / Legal; timeframe: < 1 hour)
Step 5: ESCALATE (owner: CISO / Legal; timeframe: < 1 hour)
Step 6: REPORT (owner: Legal / Compliance; timeframe: < 24 hours)
Step 7: LEARN (owner: CISO / Training; timeframe: < 1 week)

⚠ For wire transfer fraud, the first 48 hours are crucial for fund recovery.
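For teams that track incidents in ticketing or SOAR tooling, the framework can also be encoded as data so that overdue steps are flagged automatically. The step names, owners, and time limits come from the framework above; the structure and function below are an illustrative assumption, not a reference implementation.

```python
from datetime import timedelta

# The seven steps above encoded as runbook data; owners and time limits are taken
# from the framework, while the structure itself is an illustrative assumption.
RESPONSE_PLAYBOOK = [
    {"step": 1, "action": "DETECT",   "owner": "Security Ops",        "limit": timedelta(0)},
    {"step": 2, "action": "HALT",     "owner": "Employee + Manager",  "limit": timedelta(minutes=5)},
    {"step": 3, "action": "VERIFY",   "owner": "Employee",            "limit": timedelta(minutes=15)},
    {"step": 4, "action": "PRESERVE", "owner": "IT Security / Legal", "limit": timedelta(hours=1)},
    {"step": 5, "action": "ESCALATE", "owner": "CISO / Legal",        "limit": timedelta(hours=1)},
    {"step": 6, "action": "REPORT",   "owner": "Legal / Compliance",  "limit": timedelta(hours=24)},
    {"step": 7, "action": "LEARN",    "owner": "CISO / Training",     "limit": timedelta(weeks=1)},
]


def overdue_steps(elapsed: timedelta) -> list[str]:
    """Return the actions whose time limit has already passed for an open incident."""
    return [s["action"] for s in RESPONSE_PLAYBOOK if elapsed > s["limit"]]


# Example: two hours into an incident, everything up to ESCALATE should be done.
print(overdue_steps(timedelta(hours=2)))  # ['DETECT', 'HALT', 'VERIFY', 'PRESERVE', 'ESCALATE']
```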

Industry Risk Profiles

Industry | Primary Threats | Recommended Defenses
Financial Services | KYC bypass, BEC fraud, wire scams | Injection-attack detection, multi-factor verification, behavioral analytics
Healthcare | Deepfake doctors, telehealth fraud | Provider identity verification, liveness detection, claims anomaly detection
Media & Journalism | Fabricated footage, fake sources | C2PA provenance, source callback verification, forensic analysis
Legal | Deepfake evidence, witness impersonation | Digital evidence authentication, expert witness requirements
Government | Official impersonation, election interference | FBI alert monitoring, communication provenance, SemaFor tools
Insurance | Fraudulent claims, synthetic evidence | AI-powered claims verification, metadata analysis

Recommendations by Stakeholder

For executives and boards:

Priority | Action | Rationale
CRITICAL | Mandate callback verification for all wire transfers >$10K | Would have prevented the Arup $25.6M loss
CRITICAL | Deploy AI-powered deepfake detection on identity verification | KYC bypass is the primary financial fraud vector
HIGH | Fund red-team exercises with deepfake scenarios | Test organizational resilience before attackers do
HIGH | Establish executive impersonation monitoring | Detect unauthorized use of executive likenesses proactively
MEDIUM | Join the C2PA coalition and implement content provenance | Future-proof media authenticity infrastructure

For security and technology teams:

Priority | Action | Rationale
CRITICAL | Implement injection-attack detection on biometric systems | Software-based attacks bypass the camera entirely
CRITICAL | Deploy multi-factor authentication for all high-value transactions | Voice and face are no longer reliable single factors
HIGH | Integrate deepfake detection APIs into verification pipelines (see the integration sketch after these tables) | Automated detection at scale (Reality Defender, Sensity)
HIGH | Monitor the dark web for executive voice/video samples | Early warning of impending attacks
MEDIUM | Implement C2PA content provenance in media workflows | Establish chain of custody for organizational media

For employees:

Priority | Action | Rationale
CRITICAL | Never execute sensitive actions based solely on video/audio | Deepfake video calls are now effectively indistinguishable from real ones
HIGH | Report suspicious communications immediately | Early reporting enables rapid response
HIGH | Minimize public sharing of voice/video content | Reduces raw material available to attackers
MEDIUM | Participate in deepfake awareness training | Build recognition skills and reporting instincts

For individuals and families:

Priority | Action | Rationale
CRITICAL | Establish a family code word for emergency calls | Defeats voice cloning impersonation
CRITICAL | Hang up and call back on a known number | Spoofed calls cannot receive callbacks
HIGH | Set social media to private; minimize public voice/video | Reduces cloning material (three seconds of audio is enough)
HIGH | Never send money via wire, crypto, or gift cards for urgent requests | These payment methods are unrecoverable
MEDIUM | Educate elderly family members about AI voice scams | Grandparent scams specifically target the elderly
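The technology-team recommendation to integrate deepfake detection APIs into verification pipelines might look like the sketch below. The endpoint URL, authentication header, and response field are placeholders and do not describe the actual Reality Defender or Sensity APIs; a real integration would follow the vendor's documentation and, like this stub, fail closed on errors.

```python
import requests

# Placeholder endpoint and fields; not the actual Reality Defender or Sensity API.
DETECTION_ENDPOINT = "https://deepfake-detector.example.com/v1/analyze"
API_KEY = "REPLACE_ME"
BLOCK_THRESHOLD = 0.80  # assumed score above which onboarding halts for manual review


def screen_kyc_video(video_path: str) -> bool:
    """Return True if the KYC liveness video passes deepfake screening."""
    with open(video_path, "rb") as fh:
        resp = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": fh},
            timeout=30,
        )
    resp.raise_for_status()
    # Fail closed: if the (hypothetical) score field is missing, treat it as suspect.
    score = resp.json().get("deepfake_score", 1.0)
    return score < BLOCK_THRESHOLD
```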

Humans Are Losing the Detection Race

Can your team tell the difference? The data says no.

Human detection: on high-quality video deepfakes, human accuracy falls below a coin flip.

Machine detection: 80–98% accuracy on known deepfake types, but novel techniques continuously evade current models.

Detection Tool Landscape (9 Vendors)

If humans can’t detect deepfakes, who can? These vendors are building automated defenses.

Vendor | Modalities | Deployment | Primary Use Case | Key Differentiator
Sensity AI | Image, video, audio | Cloud API, on-prem | Enterprise threat intel | Up to 98% accuracy; monitoring + analysis
Reality Defender | Image, video, audio, text | Cloud API, SDK | Enterprise, government | Multi-modal; DARPA-backed; 2-line integration
Pindrop Security | Audio (voice) | Cloud API, call center | Call center fraud | Audio-specific; real-time monitoring
Microblink | Image (ID docs, biometrics) | Cloud API, SDK | KYC/identity verification | Identity-centric; ties into ID verification
Intel FakeCatcher | Video (face) | On-device | Real-time video analysis | Physiological signal analysis (blood flow)
Microsoft Video Authenticator | Image, video | Cloud | Media workflows | Frame-by-frame confidence scoring
Resemble AI Detect | Audio | Cloud API, on-device | Audio deepfake detection | Watermarking + detection in one platform
DeepMedia | Video, image | Cloud API | Content moderation | Real-time video manipulation detection
C2PA standard | All (metadata) | Embedded in media | Content provenance | Open standard; Adobe, Microsoft, Google, BBC

No single tool covers all attack vectors. Organizations should deploy layered detection — document verification + voice auth + video analysis + content provenance.
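One way to operationalize that layering is to treat each control (document verification, voice authentication, video analysis, content provenance) as an independent signal and block whenever any layer fires with high confidence. The layer functions in this sketch are hypothetical stand-ins, not specific vendor SDK calls.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Signal:
    layer: str         # e.g. "document", "voice", "video", "provenance"
    suspicious: bool   # did this layer flag the media?
    confidence: float  # 0.0-1.0, as reported by the layer


# Each layer is a callable that inspects the media and returns a Signal.
# These are hypothetical stand-ins for vendor or in-house detectors.
Layer = Callable[[str], Signal]


def layered_verdict(media_path: str, layers: List[Layer], threshold: float = 0.7) -> str:
    """Combine independent detection layers into a single allow/review/block decision."""
    signals = [layer(media_path) for layer in layers]
    if any(s.suspicious and s.confidence >= threshold for s in signals):
        return "block"          # any confident hit blocks the transaction outright
    if any(s.suspicious for s in signals):
        return "manual-review"  # weak or conflicting signals go to a human
    return "allow"
```

Routing weak or conflicting signals to manual review keeps a human in the loop for exactly the ambiguous cases that automated detection handles worst.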

DARPA Detection Pipeline

The government bet early on detection research — now it’s reaching the private sector.

2017–21: MediFor (DARPA), statistical detection
2021–25: SemaFor (DARPA + Kitware), semantic analysis
2024: DEF CON demonstration (AI Village), public showcase
Mar 2025: DSRI transition (UL Research), R&D collaboration
May 2025: Aptima, commercial market deployment

Regulatory Landscape by Jurisdiction

A patchwork of approaches from comprehensive to non-existent

Jurisdiction | Key Legislation | Approach | Key Provisions | Enforcement
United States (Federal) | NO AI FRAUD Act (proposed); FCC TCPA ruling | Sector-specific; no comprehensive law | AI robocalls illegal; proposed labeling & provenance | FCC fined robocall creator $6M; state prosecutions active
US States | 40+ bills introduced; CA AB 3211; PA Act 125 | State-by-state patchwork | CA: mandatory watermarks; PA: AI CSAM criminalized | PA AG charged 6+ individuals; prosecutions growing
European Union | EU AI Act (2024) | Comprehensive, risk-based | Synthetic media provisions; provenance requirements | Phased implementation through 2027
United Kingdom | Online Safety Act (2023) | Sector-specific, principle-based | Platform duties of care; NCII criminalized | Ofcom enforcement; criticized as insufficient for AI NCII
China | Deep Synthesis Regulations (2023) | Government-mandated | Real-name authentication; algorithm audits; visible labeling | Active enforcement
India | Draft IT Rules (2025) | Platform-focused | Visible labeling; 3-hour takedown for flagged content | Pending implementation
South Korea | Revised sexual violence laws (2024) | Criminal penalties | 5+ year sentences for deepfake porn creation | Active: 5-year sentence for ~400 deepfake porn videos
Enforcement Gaps
  • No comprehensive US federal deepfake law — reliance on patchwork state laws
  • First Amendment tensions — deepfake laws face free speech pushback
  • Evidentiary challenges — courts lack protocols for authenticating digital evidence
  • Cross-border jurisdiction — creators often operate from different jurisdictions than victims

From Research Lab to Industrial Weapon

How deepfakes went from academic curiosity to $25.6M heists in 7 years

2017
First "deepfakes" subreddit; face-swap papers published
Technology enters public awareness
The term "deepfake" originates from a Reddit user who shared AI-generated face-swap videos. Academic papers on generative adversarial networks (GANs) begin circulating.
2019
UK energy firm CEO voice clone
First major financial fraud incident
~$243,000 stolen
AI-generated voice tricked an executive into wiring funds, marking the first documented case of deepfake-enabled financial fraud.
2020
Microsoft Video Authenticator launched
First major corporate detection tool
Microsoft releases a tool that analyzes photos and videos to provide a confidence score on whether media has been artificially manipulated.
2021
DARPA MediFor concludes; SemaFor begins
Government pivots from statistical to semantic detection
DARPA recognizes that purely statistical detection can be evaded with limited resources, pivoting to semantic-level analysis that understands meaning and context.
2022
Open-source tools proliferate
Democratization threshold crossed
DeepFaceLab, Roop, and other free tools become widely available, lowering the barrier from PhD-level expertise to moderate technical skills.
2023
Slovakia election deepfake; fraud attempts spike 3,000%
Political weaponization + industrial-scale fraud
3,000% SPIKE
Audio deepfake of a Slovak politician released during a 48-hour media blackout contributes to election outcome. Fraud attempts increase 3,000% globally and 1,740% in North America.
JAN 2024
Biden robocall in New Hampshire primary
First criminal prosecution of political deepfake in US
AI-generated voice of President Biden urged Democrats not to vote. Creator Steve Kramer charged with 26 counts and fined $6 million by FCC.
JAN 2024
Taylor Swift explicit deepfakes go viral
47M views; NO AI FRAUD Act catalyzed
AI-generated explicit images spread across X (Twitter), prompting White House alarm and bipartisan Congressional action on the NO AI FRAUD Act.
FEB 2024
Arup $25.6M deepfake video conference fraud
Largest single-incident financial loss
$25.6 MILLION
Finance employee tricked via live video conference with deepfaked CFO and multiple colleagues — all participants were AI-generated.
FEB 2024
FCC ruling: AI robocalls illegal under TCPA
First major US regulatory response
FCC unanimously classified AI-generated voice cloning robocalls as "artificial" under the TCPA, enabling state AG prosecution.
2024
38 countries experience election deepfakes
Global political interference at scale
3.8B PEOPLE EXPOSED
82 political deepfakes targeting public figures documented across 38 countries in a 12-month period, affecting populations totaling 3.8 billion.
2024
ProKYC underground deepfake tool discovered
Criminal-as-a-service for identity fraud
Purpose-built tool discovered by Cato Networks, marketed specifically for bypassing KYC identity verification with synthetic faces and live video.
Q1 2025
Deepfake fraud up 700%; 8M deepfakes online
Industrialization era begins
8 MILLION DEEPFAKES
Volume explosion from 500,000 in 2023 to 8 million by 2025. Fraud attempts increase 700% in Q1 2025 alone.
MAR 2025
25 Canadians indicted for $21M grandparent scam ring
Largest voice-cloning fraud prosecution
Call centers in Montreal used AI-cloned grandchildren's voices to convince elderly victims across 46 US states to pay fake bail fees.
MAY 2025
DARPA awards Aptima commercialization contract
Government detection tech enters private sector
Critical inflection point: DARPA-funded media forensics research begins transition to commercially deployable tools through Aptima partnership.
JAN 2026
WEF “Unmasking Cybercrime” report
“Identity has become synthetic, scalable, and weaponizable”
World Economic Forum's comprehensive report on deepfakes identifies three-tier threat model: individual fraud, organized rings, and state-sponsored operations.
FEB 2026
AMA CEO warns of deepfake doctors in healthcare
New sector-specific threat vector emerges
The American Medical Association identifies deepfake impersonation of physicians as "a threat to public health" in telehealth and social media contexts.

Future Threat Trajectory (2026–2028)

Five emerging threats — color-coded by urgency

2026–2027 Imminent
Indistinguishable Realism
Advances in generative models (Sora 2 and beyond) will make deepfakes indistinguishable from reality for most humans. Detection will rely entirely on provenance and AI tools.
2026–2027 Imminent
Automated Social Engineering
AI-powered scam bots, synthetic job candidates, and coordinated multi-channel attacks will proliferate. Attacks will be personalized and persistent.
2026–2028 Imminent
Trust Crisis
Without robust provenance and detection, society risks a “complete lack of trust” in digital interactions, with profound implications for democracy, commerce, and security.
2027–2028 Near-term
Regulatory Convergence
Expect convergence toward provenance mandates, rapid takedown rules, and global standards for synthetic media labeling and verification.
2026–2028 Near-term
Courts Under Siege
AI-generated deepfake evidence in legal proceedings will force adoption of standardized digital evidence authentication protocols. Judges already struggle to identify fakes.

The Single Most Important Recommendation

Mandate callback verification through a separate pre-established channel for all wire transfers and sensitive actions — regardless of how convincing the video or audio appears.

⚠ Mandate Callback Verification

This single procedural control would have prevented the $25.6M Arup loss and the majority of deepfake-enabled financial fraud.