FinCEN Issues Alert on Deepfake Fraud Schemes in Financial Services
All Fintech
AML
November 30, 2024
The Case
On November 13, 2024, in response to an increase in suspicious activity reporting, the Financial Crimes Enforcement Network (FinCEN) issued alert FIN-2024-Alert004 to help financial institutions identify fraud schemes that use deepfake media created with generative artificial intelligence (GenAI).
“Deepfake media” is synthetic content that uses artificial intelligence/machine learning to create realistic but inauthentic videos, pictures, audio, and text. Fraudsters use such media to circumvent identity verification and authentication methods.
Regulatory Implications
FinCEN's alert emphasizes the growing threat posed by generative AI in financial fraud. Financial institutions must be vigilant in identifying and mitigating the risks associated with deepfake media. Key implications include:
Increased Fraud Complexity:
Generative AI allows fraudsters to create highly realistic fake identities, making traditional verification processes less effective. This sophistication increases the risk of fraudulent account openings and transactions.

Compliance Expectations:
Financial institutions are expected to enhance their identity verification, authentication, and due diligence controls. Ignoring or inadequately addressing these risks could invite regulatory scrutiny and potential penalties for non-compliance.

Evolving Red Flags:
FinCEN’s alert outlines specific indicators for detecting deepfake-related fraud. Financial institutions must integrate these red flags into their monitoring systems and train staff to recognize these evolving threats.
Practical Guidance for Firms
Financial institutions can take the following steps to address the risks associated with generative AI and deepfake media:
Update Identity Verification Procedures:
Incorporate additional checks, such as live verification processes or biometric verification, to validate customer identities.

Implement Phishing-Resistant Multifactor Authentication (MFA):
Use advanced MFA methods to reduce the risk of compromised authentication processes.

Integrate Deepfake Detection Tools:
Deploy commercial or open-source deepfake detection software to flag potentially fraudulent images, videos, and text.

Monitor for Red Flags:
Train staff to identify the red flag indicators outlined by FinCEN, such as inconsistencies in identity documents or unusual transaction patterns.

Conduct Targeted Risk Assessments:
Assess current controls for vulnerabilities to generative AI-based fraud and make necessary adjustments to strengthen defenses.

Enhance Staff Training:
Provide regular training on emerging threats related to generative AI and the use of deepfake media in fraud schemes.
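As a simple illustration, red-flag monitoring of the kind described above is often implemented as rule-based screening logic in an onboarding or transaction-monitoring pipeline. The sketch below is hypothetical: the record fields and flag descriptions are illustrative assumptions, not taken from FinCEN's alert or any specific vendor tool.

```python
# Minimal rule-based screener for deepfake-related red flags.
# All field names and rules here are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class OnboardingRecord:
    doc_photo_matches_selfie: bool       # ID photo vs. live selfie comparison
    liveness_check_passed: bool          # live verification / liveness result
    doc_data_consistent: bool            # document data vs. application data
    rapid_high_value_transfers: bool     # unusual activity after opening


def screen_for_red_flags(rec: OnboardingRecord) -> List[str]:
    """Return a list of red-flag descriptions for escalation/review."""
    flags: List[str] = []
    if not rec.doc_photo_matches_selfie:
        flags.append("ID photo inconsistent with live selfie")
    if not rec.liveness_check_passed:
        flags.append("Failed liveness check during verification")
    if not rec.doc_data_consistent:
        flags.append("Identity document data inconsistent with application")
    if rec.rapid_high_value_transfers:
        flags.append("Unusual transaction pattern after account opening")
    return flags
```

In practice, hits from a screener like this would feed a manual review queue and, where warranted, SAR filing workflows, rather than triggering automatic account denial.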
InnReg helps financial institutions enhance their fraud detection frameworks and adapt identity verification processes to counter emerging threats from generative AI. Our expertise supports firms in integrating advanced detection tools such as Regly and refining compliance controls to mitigate these evolving risks.
Blockchain
On December 30, 2024, the US Department of the Treasury and the IRS issued final regulations focused on decentralized finance (DeFi) platforms and their role in digital asset transactions.
RIAs
The Securities and Exchange Commission announced charges against nine investment advisors and three broker-dealers for failures by the firms and their personnel to maintain and preserve electronic communications in violation of recordkeeping provisions of the federal securities laws.
RIAs
The SEC’s order finds that, from at least October 2018 until January 2022, an investment advisory firm stated in its offering materials and other documents provided to prospective and existing private fund investors that it was voluntarily complying with AML due diligence laws, even though those laws did not apply to investment advisers.