FinCEN Issues Alert on Deepfake Fraud Schemes in Financial Services
All Fintech
AML
November 30, 2024
The Case
On November 13, 2024, the Financial Crimes Enforcement Network (FinCEN) issued alert FIN-2024-Alert004 in response to an increase in suspicious activity reporting. The alert is intended to help financial institutions identify fraud schemes that use deepfake media created with generative artificial intelligence (GenAI).
“Deepfake media” is synthetic content generated with artificial intelligence and machine learning to produce realistic but inauthentic videos, images, audio, and text that can circumvent identity verification and authentication methods.
Regulatory Implications
FinCEN's alert emphasizes the growing threat posed by generative AI in financial fraud. Financial institutions must be vigilant in identifying and mitigating the risks associated with deepfake media. Key implications include:
Increased Fraud Complexity:
Generative AI allows fraudsters to create highly realistic fake identities, making traditional verification processes less effective. This sophistication increases the risk of fraudulent account openings and transactions.
Compliance Expectations:
Financial institutions are expected to enhance their identity verification, authentication, and due diligence controls. Ignoring or inadequately addressing these risks could invite regulatory scrutiny and potential penalties for non-compliance.
Evolving Red Flags:
FinCEN’s alert outlines specific indicators for detecting deepfake-related fraud. Financial institutions must integrate these red flags into their monitoring systems and train staff to recognize these evolving threats.
Practical Guidance for Firms
Financial institutions can take the following steps to address the risks associated with generative AI and deepfake media:
Update Identity Verification Procedures:
Incorporate additional checks, such as live verification processes or biometric verification, to validate customer identities.
Implement Phishing-Resistant Multifactor Authentication (MFA):
Use advanced MFA methods to reduce the risk of compromised authentication processes.
Integrate Deepfake Detection Tools:
Deploy commercial or open-source deepfake detection software to flag potentially fraudulent images, videos, and text.
Monitor for Red Flags:
Train staff to identify the red flag indicators outlined by FinCEN, such as inconsistencies in identity documents or unusual transaction patterns.
Conduct Targeted Risk Assessments:
Assess current controls for vulnerabilities to generative AI-based fraud and make necessary adjustments to strengthen defenses.
Enhance Staff Training:
Provide regular training on emerging threats related to generative AI and the use of deepfake media in fraud schemes.
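For firms automating parts of this workflow, the red flag monitoring step can be sketched in code. The example below is a minimal, hypothetical illustration: the signal names and escalation logic are our own assumptions, not fields or thresholds prescribed by FinCEN, and the flag descriptions only paraphrase the kinds of indicators listed in FIN-2024-Alert004.

```python
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    """Hypothetical signals collected during customer onboarding.

    Field names are illustrative, not drawn from the FinCEN alert.
    """
    metadata_shows_editing_software: bool = False
    fails_live_verification: bool = False
    document_data_inconsistent: bool = False
    unusual_activity_after_opening: bool = False

def deepfake_red_flags(signals: OnboardingSignals) -> list[str]:
    """Return descriptions of the red flags triggered by the signals.

    Descriptions paraphrase indicator categories from FIN-2024-Alert004;
    an empty list means no flag in this (non-exhaustive) set fired.
    """
    flags = []
    if signals.metadata_shows_editing_software:
        flags.append("ID photo metadata suggests GenAI or editing software")
    if signals.fails_live_verification:
        flags.append("Customer image inconsistent with live verification")
    if signals.document_data_inconsistent:
        flags.append("Identity document conflicts with other customer data")
    if signals.unusual_activity_after_opening:
        flags.append("Unusual transaction pattern shortly after opening")
    return flags
```

In practice, a non-empty result would route the application to manual review rather than auto-reject it, since each indicator can also have a benign explanation.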
InnReg helps financial institutions enhance their fraud detection frameworks and adapt identity verification processes to counter emerging threats from generative AI. Our expertise supports firms in integrating advanced detection tools such as Regly and refining compliance controls to mitigate these evolving risks.