Identity in the Age of Deepfakes: Why This Isn’t the Fraud Apocalypse

Jun 13, 2025

Written by: Olivier Koch

Back in early 2023, not long after ChatGPT first launched, we wrote an article asking the question: Will ChatGPT make identity fraud easier? Our conclusion was that while generative AI tools like ChatGPT give fraudsters new ways to commit fraud, they won't bring about a fraud apocalypse. Instead, they will change how we build fraud detection tools in the future.

So when image generation powered by GPT-4o was introduced in March, that question came up again. Is this the fraud apocalypse? Will tools like ChatGPT and deepfakes bring about the end of online identity verification as we know it? Well, not to be anticlimactic, but the answer is, overwhelmingly, no. Here's why.

Fraud evolves daily – our fraud defenses must evolve faster

Generative AI (GenAI) is now part of our daily lives. And whether we like it or not, this means fraudsters will try to use it to their advantage – whether that’s generating digitally rendered images of documents from scratch or leveraging face swap apps to create deepfake videos. What used to require specialist tools and very advanced skills has now become a commodity that any programmer can leverage with a laptop and an off-the-shelf graphics processing unit (GPU). With the right amount of prompting, both open-source and proprietary GenAI models are now able to generate highly realistic identity samples that could fool even the trained human eye.

These identity documents were generated with ChatGPT. They never existed as actual physical documents and could fool an untrained human eye.

While the accessibility of this technology means fraudsters have new avenues at their disposal, it's no secret that fraudsters have always adapted their tactics to find new loopholes. For the record, it's also not the first time that fraudsters have had fake, digitally generated documents at their disposal. Document templates and samples have been easily accessible online for years; combined with Photoshop, they let fraudsters digitally reproduce images of real-life documents. Last year, OnlyFake, an underground online service selling identity documents it claimed were generated by AI, made headlines.

With years of experience in the identity verification sector, we can confidently say that this isn't the first time (and won't be the last) that we've seen fraudsters use new technologies to their advantage. This is why our job as fraud preventers is never done, and why we are continually developing our products to respond to new threats.

Data doesn’t lie: Why early detection and continuous monitoring are key to fraud prevention

Fraudsters use a trial-and-error approach: they first try a new tactic at a small scale before rolling it out at a large scale. By continually monitoring our data and the types of attacks our customers see, we can catch these types of fraud early on. And while identifying these new attack vectors might sound like a needle-in-a-haystack problem, this is where we can use AI to our advantage. Machine learning algorithms are designed to detect fraud anomalies within a massive amount of genuine data, as the sketch below illustrates.
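To make the idea concrete, here is a minimal sketch (not our production pipeline) of how an unsupervised outlier detector can surface a small cluster of novel attempts hiding among mostly genuine traffic; the feature vectors and the scikit-learn model choice below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative feature vectors for verification attempts (e.g., image-quality
# scores, capture metadata, model confidences). Synthetic placeholders here.
rng = np.random.default_rng(42)
genuine_traffic = rng.normal(loc=0.0, scale=1.0, size=(10_000, 8))
novel_attack = rng.normal(loc=4.0, scale=0.5, size=(20, 8))  # small new cluster

# Fit on historical (mostly genuine) traffic, then score incoming attempts.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(genuine_traffic)

labels = detector.predict(novel_attack)  # -1 = anomaly, 1 = inlier
print(f"{(labels == -1).sum()} of {len(labels)} new attempts flagged as anomalous")
```

Flagged clusters can then be triaged early, before the tactic is rolled out at scale.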

And the data never lies. Over the last few years, we've seen a shift from physical counterfeit fraud (a physical, fake version of an identity document) to digital forgeries (digitally manipulated documents). The data suggested fraudsters were moving from physical fraud to digital manipulation, allowing us to get ahead, skate where the puck was going, and proactively build our solution to lean further into the digital fraud space.

Shift from physical document fraud to digital document fraud. Data insights from Entrust’s 2025 Identity Fraud Report.

Similarly, we saw our first deepfake attempt back in 2021 (see the image below – we call this "deepfake patient 0") and built our flagship biometric verification product, Motion, anticipating such deepfake attacks at scale. It wasn't until 2023 that this type of attack really took off, with a 3,000% increase in deepfake attempts. The future proved us right, and thanks to early detection and continuous monitoring we were able to stay one step ahead.

"Deepfake patient 0," first spotted in 2021. We built our flagship biometric verification product (Motion) anticipating such deepfake attempts at scale.

Fighting AI-generated fraud with... AI

As realistic as AI-generated documents and deepfakes may be to the human eye, they leave traces that are visible to machine learning engines. For one thing, they often look too perfect, failing to mirror the realistic experience of someone taking a photo of their identity document, or a selfie, during an identity verification process. This type of fraud also often leaves small artifacts at the pixel level, the tiny picture elements that make up a digital image, and these artifacts can indicate something is amiss.
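As a simplified illustration of a pixel-level check (a toy example, not our detector), generated or resampled images can show an unusual distribution of energy in the frequency domain; the threshold and the random input image below are placeholder assumptions:

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside a central low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    r = min(h, w) // 8  # heuristic low-frequency radius
    cy, cx = h // 2, w // 2
    low_band = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return float(1.0 - low_band / spectrum.sum())

# Placeholder input; a real system would calibrate the threshold on labeled data.
image = np.random.rand(256, 256)  # stand-in for a grayscale selfie or document photo
if high_freq_energy_ratio(image) > 0.35:
    print("unusual pixel-level energy profile: flag for review")
```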

A well-maintained supervised learning engine trained on millions of samples of identity documents and faces can spot such telltale signs. The models consider these samples to be out-of-distribution and are therefore able to flag them as fraud.

Our models learn visual embeddings to separate genuine samples from fraud samples. (a) Pre-trained, off-the-shelf embeddings. (b) Fine-tuned embeddings. Notice how fine-tuned embeddings classify deepfakes as fraudulent even when no deepfakes were seen during training.
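In the spirit of the figure above, here is a minimal sketch of the out-of-distribution idea, with random vectors standing in for embeddings from a fine-tuned vision encoder (an assumption for illustration, not our actual model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder embeddings; in practice they would come from a fine-tuned
# encoder applied to document photos and selfies.
rng = np.random.default_rng(0)
genuine_emb = rng.normal(loc=0.0, scale=1.0, size=(5_000, 128))
fraud_emb = rng.normal(loc=3.0, scale=1.0, size=(500, 128))

X = np.vstack([genuine_emb, fraud_emb])
y = np.concatenate([np.zeros(len(genuine_emb)), np.ones(len(fraud_emb))])
clf = LogisticRegression(max_iter=1_000).fit(X, y)

# A deepfake type unseen in training can still land far from the genuine
# cluster in embedding space, so its predicted fraud probability stays high.
unseen_deepfake = rng.normal(loc=2.5, scale=1.0, size=(1, 128))
print(f"fraud probability: {clf.predict_proba(unseen_deepfake)[0, 1]:.2f}")
```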

Preventing injection attacks can stop AI-generated fraud at the source

Fraudsters use a specific type of attack to get AI-generated images and deepfakes into the capture process during identity verification. Because users take a live photo or video of their ID and their face, fraudsters must "inject" fraudulent samples into the feed. These are called injection attacks. One way fraudsters do this is by replacing a real hardware camera (like the one on a smartphone) with software, allowing them to use any source image, video, or recording.

One essential layer of defense is to look at the application rather than the applicant, or end user. For example, what insights can you get from the phone itself? It will become even more difficult for fraudsters to bypass smartphone security features as manufacturers raise the bar. Additional layers of defense come in the form of device-level intelligence and signals. During the verification capture process, device intelligence monitors specific signals related to the device and can pinpoint whether a fraudster is using software to bypass the camera feed, or is showing other revealing signs of an injection attack.
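As a rough sketch of how such device-level signals might be combined (the field names and the list of virtual cameras below are hypothetical, not a real Onfido API):

```python
# Hypothetical metadata reported by a capture SDK; all field names are
# illustrative assumptions, not an actual product interface.
KNOWN_VIRTUAL_CAMERAS = {"obs virtual camera", "manycam", "v4l2loopback"}

def injection_risk_signals(device_report: dict) -> list[str]:
    """Collect human-readable risk signals from client device metadata."""
    signals = []
    camera = device_report.get("camera_label", "").lower()
    if any(name in camera for name in KNOWN_VIRTUAL_CAMERAS):
        signals.append(f"virtual camera detected: {camera}")
    if device_report.get("is_emulator", False):
        signals.append("device appears to be an emulator")
    if not device_report.get("hardware_attestation_passed", False):
        signals.append("hardware attestation failed or missing")
    return signals

report = {"camera_label": "OBS Virtual Camera", "is_emulator": False,
          "hardware_attestation_passed": True}
print(injection_risk_signals(report))  # ['virtual camera detected: obs virtual camera']
```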

Deepfakes are realistic, sophisticated fraud – making it harder for fraudsters to "hill-climb"

"Hill-climbing" is a technique used by fraudsters to continuously improve their attack based on rejections from the system. It’s useful when fraudsters can build a mental model of what is wrong with their failed attempt. As mentioned earlier, fraudsters use a trial-and-error approach when submitting fraudulent attempts. They first test a small-scale attack, building on it each time before committing a large-scale "optimized" attack.

But counterintuitively, generating highly realistic deepfakes makes hill-climbing very hard. When a highly realistic deepfake is rejected, where does the fraudster go next? There's very little they can do to improve on an already sophisticated, high-quality deepfake.

Hill-climbing also leaves indicators that we, as identity verification providers, are able to spot. To continuously improve their attack, fraudsters need to submit a large number of attempts using many variants of their method; this is as noticeable as a criminal repeatedly knocking on a door before trying to break into a building. Monitoring for similar, repeat fraud attempts can catch this type of fraud across both documents and faces by flagging when variations of the same identity document, or the same face, enter a business's onboarding system, as sketched below.
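A minimal sketch of this repeat-attempt monitoring, assuming each attempt is reduced to an embedding vector by the verification models (the vectors and the similarity threshold here are synthetic placeholders):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic embeddings for recent onboarding attempts: six variants of one
# face/document (a hill-climber), plus one unrelated applicant.
rng = np.random.default_rng(1)
base = rng.normal(size=256)
history = [base + rng.normal(scale=0.05, size=256) for _ in range(6)]
history.append(rng.normal(size=256))

new_attempt = base + rng.normal(scale=0.05, size=256)

# Count prior attempts that are near-duplicates of the new one.
near_dupes = sum(cosine_sim(new_attempt, past) > 0.9 for past in history)
if near_dupes >= 3:
    print(f"{near_dupes} similar prior attempts: possible hill-climbing")
```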

GenAI is getting more sophisticated – what does that mean for the future?

Fraud is becoming a scaled, organized business. Just do the math: if a bitcoin company offers a $50 voucher for each account opening, then 1,000 fake accounts yield $50,000 in a few hours. It's a lucrative avenue for fraudsters.

Targeted account takeovers and mule accounts, driven by social engineering and coercion scams, are also becoming attractive. Recent data from the U.S. Federal Trade Commission found that the biggest scam losses happened by bank transfer or payment: in 2024, people reported losing $2 billion to bank transfer or payment scams, followed by cryptocurrency scams at $1.4 billion.

With GenAI tools like ChatGPT still in their infancy, many are naturally concerned about what this will mean for fraud long term. It's likely that AI-generated government IDs as well as deepfaked biometrics will become even more prevalent. Why? Deepfake generation has become a commodity: highly accessible to the masses and easy to produce at scale. By comparison, some other fraud vectors are harder to mass-produce (although it's important to note that we will still see them, whether synthetic identities, 3D masks, or physical document counterfeits).

Protecting identities in the age of deepfakes

While fraud continues to evolve, so do fraud prevention capabilities. Businesses concerned with fraud detection in 2025 should focus on working with identity verification providers that offer multi-layered, constantly evolving protection.

Some of the best practices for preventing AI-generated fraud and deepfakes include:

  • Early detection and continuous monitoring: By continually monitoring the data and the types of attacks our customers see, we can catch emerging fraud attack vectors early on, then adapt and build products in anticipation of future attacks.
  • Dedicated machine learning models: A well-maintained supervised learning engine trained on millions of samples of identity documents and faces can spot telltale signs of fraud.
  • Robust on-device protections: Deepfakes require an injection attack. Ensuring that the user is on a genuine device will be key to keeping fraudsters out.
  • Proprietary and constantly evolving product designs: Robust fraud detection isn't built overnight. It takes years to develop sophisticated machine learning engines capable of differentiating genuine samples from fraudulent ones, and of handling all the nuances that come with different fraud attack vectors.
  • Data moats remain vital: Measuring model performance requires very large volumes of data. In addition, fraudsters' early attempts often go unnoticed at small scale. The data moat we have built over the past decade gives us a unique edge against deepfakes.

In conclusion, AI-generated fake documents and deepfakes make headlines for good reason and can be worrying from the public's standpoint. But the landscape within the identity industry itself is very different. While such fraud does present challenges, we already provide strong protection against it and will continue to do so through future product development. We work hard for our customers, so they don't have to.

Explore the 2025 Identity Fraud Report

Get more insights on the latest fraud trends and explore how businesses can defend against them by downloading the 2025 Identity Fraud Report.

Olivier Koch
Director of Applied Science

Olivier leads the Onfido AI team. He has 13 years of industry experience leading machine learning teams across domains such as defense at Thales and online advertising at Criteo. Olivier holds a Ph.D. in Computer Science from MIT and an engineering degree from ENSTA Paris. He has published at international venues such as CVPR, ICCV, and IJRR and holds several patents.
