Basic fraud tactics that are relatively easy to discern – like phishing – are quickly giving way to AI-generated deepfakes and synthetic identities that are both hyper-realistic and scalable.
AI-Generated Deepfakes Fueling a Fraud Arms Race
Whether it’s the combination of different attack vectors, fraud rings sharing methodologies and offering fraud-as-a-service (FaaS), or fraudsters creating hyper-realistic deepfakes, fraud is becoming decidedly more sophisticated. One of the core drivers behind this rise in sophistication is AI.
Generative AI, also known as GenAI, employs machine learning to create new content such as images, video, audio, and text. This content is produced by generative models – large language models (LLMs) for text, and diffusion and related models for images, video, and audio – trained on vast volumes of data to deliver hyper-realistic results. GenAI tools, including OpenAI’s ChatGPT and Microsoft’s Copilot, are used by many organizations today to boost employee productivity. However, the same tools are increasingly being used by scammers to create more convincing phishing emails and hyper-realistic deepfakes. In some ways, and without clear and consistent regulation, AI is fueling a fraud arms race.
Some common types of deepfake scams today include:
- Family member in distress: The fraudster typically uses a deepfake voice or image to impersonate a family member in distress with a request for money or other sensitive information. Seniors are often a favored target for these deepfake scams as highlighted by the all-too-common grandparent scam.
- Investment scams: GenAI is used to produce hyper-realistic deepfake images, videos, and audio recordings of prominent business leaders and celebrities to trick unsuspecting investors out of their cash, often luring them to a malicious website in the process.
- Fake executives: The scammer impersonates a senior company executive using an AI-generated deepfake, which can range from a simple text or email to a highly elaborate audio or video call, usually conveying a sense of urgency. An unwitting employee is then tricked into releasing funds and/or other sensitive company information to the fraudster.
While AI has enabled more tamper-resistant documents and improved the efficacy of identity verification (IDV) processes with AI-powered biometrics, bad actors have also upped their game. Unlike physical counterfeits, which require a manual printing process, digital forgeries created with GenAI toolkits, face swaps, and online templates are easier and cheaper to produce – and to scale. Indeed, fraudsters are creating more digital forgeries than physical counterfeits for the first time ever. Our 2025 Identity Fraud Report reveals that digital forgeries now account for 57.46% of all document fraud. This represents a 244% year-over-year increase, as well as a 1,600% increase since 2021!
Deepfakes Emerge as the New Face of Biometric Fraud
Deepfakes first became a widespread attack vector in 2023, representing a particularly pernicious threat to fraud prevention and detection strategies that rely on biometric checks. A biometric check usually involves either a static photo (selfie) or a video element (video/motion) to confirm a person’s identity. A deepfake is a digital manipulation of this photo or video in which a person’s face is altered to appear as someone else’s.
Deepfakes vary in sophistication and can be grouped into three categories (modeled in the sketch after this list):
- Face swaps: A new face is superimposed onto a target head. Sophisticated face swaps use AI to morph and blend the new face onto the target, whereas a “cheapfake” crudely pastes one face over another.
- Fully generated images: Entirely new faces or images created from scratch by GenAI models.
- Lip-sync videos: The original person stays the same, but their lips are manipulated (and sometimes combined with deepfaked voices) to make it appear as though they are saying something they never said in the original video.
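
To make the taxonomy concrete, here is how a detection pipeline might represent these categories in code. This is a minimal, hypothetical TypeScript model – the type and field names are assumptions for illustration, not any particular vendor’s API:

```typescript
// Hypothetical result type a deepfake-detection service might return,
// modeling the three categories described above.
type DeepfakeCategory =
  | { kind: "face_swap"; technique: "ai_morphed" | "cheapfake" } // AI-blended vs. crudely pasted
  | { kind: "fully_generated" }                                  // face synthesized entirely by a GenAI model
  | { kind: "lip_sync"; voiceCloned: boolean };                  // mouth manipulated; audio optionally deepfaked

interface DetectionResult {
  isDeepfake: boolean;
  category?: DeepfakeCategory; // present only when isDeepfake is true
  confidence: number;          // model confidence in [0, 1]
}
```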
As AI-generated deepfakes have become increasingly realistic, IDV tactics have also evolved – including greater use of video biometric checks. However, scammers are keeping pace: deepfakes now account for 40% of all video biometric fraud attempts. Plus, it’s estimated that there is one deepfake attempt every five minutes!
Injecting Deepfakes Into the IDV Process
Injection attacks are one tactic fraudsters use to insert deepfakes – like a manipulated video or photo – into the IDV stream in an attempt to bypass know your customer (KYC) onboarding checks. There are two types of injection attacks:
- Virtual cameras: The most common injection method, in which fraudsters replace a real hardware camera with software that emulates one, letting them feed in any deepfake source video or recording.
- Network injection: A more sophisticated method in which fraudsters use code to inject the deepfake directly into the data stream sent to the IDV provider, bypassing the capture device entirely.
Previously, presentation attacks – holding a photo, screen, or mask up to a real camera – were the more popular way of introducing deepfake content into biometric checks. But as fraudsters share their knowledge and off-the-shelf apps lower the technical know-how required, virtual camera injection has become the method of choice – which is why many IDV flows now look for telltale signs of virtual camera software on the client, as sketched below.
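
As one illustration, a browser-based IDV flow can gather a weak client-side signal by checking video input labels against known virtual camera software. The sketch below uses the standard navigator.mediaDevices web API; the deny list is illustrative and easily evaded, so treat this as one signal among many, never a standalone control:

```typescript
// Illustrative deny list – real products maintain far broader, server-side intelligence.
const VIRTUAL_CAMERA_PATTERNS = [/obs/i, /virtual/i, /manycam/i, /snap camera/i];

async function detectVirtualCameras(): Promise<string[]> {
  // Labels from enumerateDevices() are only populated after the user
  // grants camera permission, so request (and immediately release) access.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  stream.getTracks().forEach((track) => track.stop());

  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices
    .filter((d) => d.kind === "videoinput")
    .map((d) => d.label)
    .filter((label) => VIRTUAL_CAMERA_PATTERNS.some((p) => p.test(label)));
}

// Usage: surface the signal alongside the selfie/video capture step.
detectVirtualCameras().then((suspicious) => {
  if (suspicious.length > 0) {
    console.warn("Possible virtual camera(s) detected:", suspicious);
  }
});
```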
Fighting Back
Faced with GenAI tools that boost both the volume and realism of deepfake attacks, it’s paramount for organizations of all shapes and sizes to protect themselves by detecting fraud and misinformation. Here’s how:
- Use AI to fight AI – Harness the power of AI and machine learning models that have been trained to detect deepfakes using specific fraud markers, many of which are non-visual. This helps to automate fraud prevention, adding scale and speed in the process.
- Adopt a Zero Trust strategy – AI is a powerful tool to help fight fraud, but, as this blog post points out, it becomes even more powerful when it’s part of a larger Zero Trust strategy that requires all users to be verified, authorized, and continuously validated.
- Really Know Your Customer – Stop fraudsters at the point of onboarding with AI-powered biometric identity verification to weed out deepfakes.
- Apply a layered approach – IDV checks include document, biometric, and data verification, among others. A layered IDV approach lets organizations balance the number of checks – and the associated friction – against an individual’s risk profile to identify deepfakes (see the sketch after this list).
- Be vigilant throughout the customer lifecycle – While onboarding is the first line of defense, you should build IDV checks into the entire customer lifecycle to help prevent deepfake account takeovers and fraudulent transactions.
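
To make the layered idea concrete, here is a minimal sketch of a risk-based check policy. The check names, thresholds, and risk model are hypothetical – real orchestration logic would be far richer:

```typescript
// Illustrative, risk-based IDV policy: higher risk triggers more checks
// (and more friction). Check names and thresholds are hypothetical.
type Check = "document" | "data_verification" | "biometric_selfie" | "biometric_video";

function checksForRisk(riskScore: number): Check[] {
  const checks: Check[] = ["document"];                   // baseline for every onboarding
  if (riskScore >= 0.3) checks.push("data_verification"); // corroborate against external data sources
  if (riskScore >= 0.5) checks.push("biometric_selfie");  // static photo liveness check
  if (riskScore >= 0.8) checks.push("biometric_video");   // motion check, harder to deepfake convincingly
  return checks;
}

// Example: a high-risk signup gets the full stack of checks.
console.log(checksForRisk(0.85));
// -> ["document", "data_verification", "biometric_selfie", "biometric_video"]
```

The point is simply that low-risk users see minimal friction, while high-risk users face every layer of defense.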
Like death and taxes, increasingly sophisticated AI-generated deepfakes are inevitable. Read our 2025 Identity Fraud Report to stay ahead of scammers and keep your organization and customers safe.