
Generative AI Marks the End of Cybercrime Amateur Hour

November 25, 2024


Written by: Jenn Markey & Rohan Ramesh


From deepfakes and biometric fraud to nation-state attacks and cybercrime-as-a-service, the threat landscape continues to intensify. Technology, especially AI, is arming an increasingly savvy and sizable cohort of cybercriminals, marking the end of cybercrime amateur hour.

Cybercrime is already big business, estimated at over $9.5 trillion in 2024 and compared to the world’s third-largest economy in a recent Bank of America investor note. From data breaches and ransomware to deepfakes and synthetic identity fraud, cybercrime is poised to intensify and accelerate further, in large part due to AI.

Generative AI – A Cybercrime Treasure Chest

Generative AI, or GenAI for short, employs machine learning to create credible new content, including text, imagery, audio, and video. Many organizations already use GenAI tools like Microsoft Copilot as an office productivity booster. However, GenAI tools are also increasingly being used by cybercriminals.

Indeed, today’s bad actors have access to a cybercrime treasure chest: GenAI tools that create very convincing phishing emails and hyper-realistic deepfake multimedia content, along with sites that specialize in creating credible fake documents. As a result, digital document forgeries skyrocketed by 244% in 2024, and deepfakes now account for 40% of all biometric fraud, according to our 2025 Identity Fraud Report.

Specific types of GenAI-enabled attacks include:

  • Deepfake creation via face-swap apps and other online software tools, used to open fraudulent new accounts or gain unauthorized access to existing ones
  • Voice spoofing that creates new or cloned voices to help bypass voice recognition software
  • Text and image generation, including phishing email templates
  • Automated data scraping of available sources to facilitate synthetic identity creation or credential stuffing attacks
  • Credential stuffing bots that use stolen account credentials to gain unauthorized access to user accounts, or that automate the submission of loan and credit card applications using stolen or synthetic identities

Cybercriminals and Fraudsters Embrace “As-a-Service”

From fraud to ransomware to phishing and beyond, cybercriminals are embracing as-a-service models, sharing known vulnerabilities and threat tactics over the internet to up their own game and that of others. With fraud-as-a-service (FaaS), ransomware-as-a-service (RaaS), and phishing-as-a-service (PhaaS), savvier bad actors profit from their know-how by selling it to less sophisticated scammers. The use of cloud-based infrastructure also helps bad actors of all skill levels evade detection.

All of this is increasing both the overall number of attacks and the volume of sophisticated attacks. Here are some of the more prevalent cybercrime-as-a-service offerings:

  • Fraud-as-a-Service (FaaS) – Global fraud scams and bank fraud schemes totaled $485.6B in 2023, an all-time high, so it’s no surprise that more bad actors are eyeing this attractive target. FaaS providers typically operate on the dark web, providing amateurs with phishing kits, stolen credit card information, account takeover services, and more.
  • Ransomware-as-a-Service (RaaS) – Ransomware is involved in 20% of all cybersecurity incidents, according to IBM. With RaaS, ransomware developers sell ransomware code or malware to amateur hackers called “affiliates.” LockBit, one of the most prevalent and damaging ransomware strains, was spread via RaaS.
  • Phishing-as-a-Service (PhaaS) – An Egress survey of cybersecurity leaders found that in 2023, 94% of businesses were impacted by phishing attacks. GenAI – combined with as-a-service delivery models – can produce phishing campaigns that are highly credible, targeted, scalable, and ultimately effective. It comes as no surprise then that 95% of those cyber leaders surveyed are stressed about email security.

R/evolution of Cybercrime: Pre-, Peak-, and Post-Pandemic

Whether it’s how we work, buy groceries, receive medical care, or interact with businesses and our government, it seems the last five years can be grouped into pre-, peak-, and post-pandemic eras. The onset of the pandemic was rocket fuel for digital transformation across sectors, injecting new opportunities and risks for all.

Five years ago, most document fraud was performed on physical identity documents rather than the digital forgeries we see today. During the pandemic, there was a marked uptick in fraud as more and more businesses transitioned online – in some cases overnight. While fraud volumes were at an all-time high during this period, it was before GenAI went mainstream, so tactics were far less sophisticated. Fast forward to today: fraud rates have fallen back to pre-pandemic levels, but GenAI has increased the sophistication, scale, and efficacy of the attacks.

A new dimension in our post-pandemic world is the rise of geopolitical tensions and nation-state attackers. With associated economic sanctions, some jurisdictions, including North Korea, are believed to be using fraud as a source of revenue.

Combating Would-Be Cybercriminals

From fraud to ransomware to phishing and beyond, AI can be a very powerful tool in an organization’s security toolkit to anticipate and ward off cyberattacks. But it needs to be part of a larger Zero Trust strategy. This includes implementing strong identity and access management controls, mandating phishing-resistant multi-factor authentication (MFA) everywhere, and leveraging PKI certificates to verify and encrypt communications.

Plus, organizations have a responsibility to employees, customers, and other stakeholders to inform and educate them on the use of good cyber hygiene practices – including MFA, strong passwords, encryption, and the protection of personally identifiable information (PII).

Some specific cybersecurity applications of GenAI as part of a larger Zero Trust strategy include:

  • Enhanced biometrics: GenAI can be used to improve the efficacy of document and biometric authentication solutions to help prevent the creation and use of deepfakes and synthetic identities at the point of onboarding and throughout the user lifecycle.
  • Adaptive threat detection: GenAI tools can be trained on historical data to proactively identify phishing attacks and other cyber threats. Plus, GenAI can analyze previous vulnerabilities and cyberattack patterns to help predict future threats.
  • Threat simulation and training: GenAI models can be trained to simulate cyber threats and attacks in a controlled environment to better equip cybersecurity teams to identify, respond to, and mitigate cyber risks.
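To make the adaptive threat detection idea above concrete, here is a minimal sketch of a classifier trained on historical labeled data to flag likely phishing messages. It is a toy Naive Bayes text model built on the Python standard library only – not any specific Entrust product or a production detector, and all training emails and labels below are invented for illustration.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class PhishingClassifier:
    """Toy Naive Bayes classifier over labeled 'phish'/'benign' emails."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "benign": Counter()}
        self.doc_counts = {"phish": 0, "benign": 0}

    def train(self, text, label):
        """Record one labeled historical email."""
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        """Return the label with the higher posterior log-probability."""
        tokens = tokenize(text)
        vocab = set(self.word_counts["phish"]) | set(self.word_counts["benign"])
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in ("phish", "benign"):
            total_words = sum(self.word_counts[label].values())
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            for tok in tokens:
                score += math.log(
                    (self.word_counts[label][tok] + 1) / (total_words + len(vocab))
                )
            scores[label] = score
        return max(scores, key=scores.get)


# Hypothetical historical training data (invented for this sketch)
clf = PhishingClassifier()
clf.train("urgent verify your account password immediately", "phish")
clf.train("click here to claim your prize reward now", "phish")
clf.train("meeting notes attached for tomorrow's project review", "benign")
clf.train("lunch plans for friday team outing", "benign")

print(clf.predict("please verify your password urgently"))  # → phish
```

In practice this role is filled by models trained on far larger corpora of message text, sender metadata, and URL features, but the principle is the same: learn from previous attack patterns to score new traffic before a user ever clicks.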

To limit exposure to “Harvest Now, Decrypt Later” attacks on long-lived data like financial information and government intelligence, organizations would also be wise to begin their journey to post-quantum cryptography (PQC) sooner rather than later.

Employing a Zero Trust strategy with a robust, informed identity security posture – and leveraging GenAI – will make it far more difficult for would-be cybercriminals to compromise systems, steal data, and profit from their efforts. Stay vigilant and prepared with AI-powered, identity-centric security.

Jenn Markey
Advisor, Entrust Cybersecurity Institute
Jenn Markey is a content advisor and thought leader with the Entrust Cybersecurity Institute. Her previous roles with Entrust include VP Product Marketing for the Payments and Identity portfolio and Director Product Marketing for the company’s Identity and Access Management (IAM) business. Jenn brings 25+ years of high-tech product management, business development, and marketing experience to the Entrust Cybersecurity Institute, with significant expertise in content development and curation.