
Biometrics: A Flash Point in AI Regulation

April 22, 2024
Written by: Jenn Markey & Aled Lloyd Owen


According to proprietary verification data from Onfido (now a part of Entrust), deepfakes rose 3000% from 2022 to 2023. And with the increasing availability of deepfake software and improvements in AI, the scale and sophistication of these attacks are expected to further intensify. As it becomes more difficult to discern legitimate identities from deepfakes, AI-enabled biometrics can offer consumers, citizens, and organizations some much-needed protection from bad actors, while also improving overall convenience and experience. Indeed, AI-enabled biometrics has ushered in a new era for verification and authentication. So, with such promise, why is biometrics such a flash point in AI regulatory discussions?

Like the proverb that warns “the road to Hell is paved with good intentions,” the unchecked development and use of AI-enabled biometrics may have unintended – even Orwellian – consequences. The Federal Trade Commission (FTC) has warned that the use of AI-enabled biometrics comes with significant privacy and data concerns, along with the potential for increased bias and discrimination. The unchecked use of biometric data by law enforcement and other government agencies could also infringe on civil rights. In some countries, AI and biometrics are already being used for mass surveillance and predictive policing, which should alarm any citizen.

The very existence of mass databases of biometric data is sure to attract the attention of all types of malicious actors, including nation-state attackers. In a critical election year with close to half the world’s population headed to the polls, biometric data is already being used to create deepfake video and audio recordings of political candidates, swaying voters and threatening the democratic process. To help defend against these and other concerns, the pending EU Artificial Intelligence Act will ban certain AI applications, including biometric categorization and identification systems based on sensitive characteristics and the untargeted scraping of facial images from the web or CCTV footage.

The onus is on us … all of us

Legal obligations aside, biometric solution vendors and users have a duty of care to humanity to help promote the responsible development and use of AI. Maintaining transparency and consent in the collection and use of biometric data at all times is crucial. Using diverse training data for AI models and conducting regular audits to mitigate the risk of unconscious bias are also vital safeguards. Still another is adopting a Zero Trust strategy for the collection, storage, use, and transmission of biometric data. After all, you can’t replace your palm print or facial ID the way you can a compromised credit card. The onus is on biometric vendors and users to establish clear policies for the collection, use, and storage of biometric data, and to provide employees with regular training on how to use such solutions and how to recognize potential security threats.

It's a brave new world. AI-generated deepfakes and AI-enabled biometrics are here to stay. Listen to our podcast episode on this topic for more information on how to best navigate the flash points in AI and biometrics.

Jenn Markey
Advisor, Entrust Cybersecurity Institute
Jenn Markey is a content advisor and thought leader with the Entrust Cybersecurity Institute. Her previous roles with Entrust include VP Product Marketing for the Payments and Identity portfolio and Director Product Marketing for the company’s Identity and Access Management (IAM) business. Jenn brings 25+ years of high tech product management, business development, and marketing experience to the Entrust Cybersecurity Institute with significant expertise in content development and curation.
Aled Lloyd Owen
Global Policy Director, Onfido (Now Entrust)
Aled is Senior Director for Global Policy at Onfido (now Entrust). He provides strategic policy leadership to ensure Onfido remains at the cutting edge of developments in identity verification, AI, regulation, and compliance. He has over a decade of experience engaging with complex emerging technology, policy, security, AI, and data protection challenges as a UK government official, counsellor to the European Union, and US-based academic. He is an advisory board member of the UK All Party Parliamentary Group on AI and a Fellow of the Royal Society of Arts.