Cyberattacks were estimated to have cost the global economy $8 trillion USD in 2023, a figure forecast to grow to $10.5 trillion by 2025. In many cases, the scale and effectiveness of these attacks are being fueled by artificial intelligence (AI), especially deepfakes. Our Identity Fraud Report 2024 shows a 31x increase in the volume of deepfake attempts between 2022 and 2023. Facing this intensifying threat landscape, governments and enterprises around the world are scrambling to implement Zero Trust strategies to improve their cyber-risk posture and resilience. As evidence of Zero Trust's importance to securing the organization, just 18% of respondents in the 2024 State of Zero Trust & Encryption Study sponsored by Entrust say that Zero Trust is not a priority at this time.

While previous Zero Trust journeys may have sputtered due to the limits of existing technology and the rigor of the framework itself, AI is a game changer. On the surface, Zero Trust and AI may appear to be polar-opposite concepts, with the former framed by the strict “Never Trust, Always Verify” principle, while the latter is characterized by both the promise and fear of the great unknown. However, much like “opposites attract,” Zero Trust and AI are natural partners.

An AI-Powered Approach to Zero Trust

The Entrust report cites a lack of in-house expertise as the biggest challenge to implementing Zero Trust (named by 47% of respondents), making it apparent that additional resources are needed. Zero Trust demands constant vigilance, and that's where AI's ability to discover, classify, and process large volumes of distributed data comes in. AI can dramatically speed up the detection of and response to cyberattacks.

However, bad actors may try to poison or otherwise manipulate the training data to blunt the effectiveness of such AI systems. Zero Trust and AI thus present a chicken-and-egg problem: AI-enhanced visibility and decision-making can increase Zero Trust effectiveness, but Zero Trust is needed to protect the integrity of the data used to train the AI models.
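To make the data-integrity point concrete, here is a minimal, hypothetical sketch of one naive defense: screening a batch of training values for statistical outliers before they reach the model. This is an illustration of the general idea only, not a technique described in the report; the function name and the z-score threshold are assumptions, and real poisoning defenses are far more sophisticated.

```python
import statistics

def filter_outliers(samples: list[float], z_threshold: float = 3.0) -> list[float]:
    """Naive training-data integrity gate (illustrative only).

    Drops any sample whose z-score against the batch exceeds the
    threshold, on the assumption that extreme outliers may be
    injected/poisoned values rather than legitimate data.
    """
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        # All samples identical; nothing to flag.
        return list(samples)
    return [x for x in samples if abs(x - mean) / stdev <= z_threshold]

# Usage: a batch of normal readings with one injected extreme value.
clean = filter_outliers([1.0] * 20 + [100.0])
```

A filter like this is only a first line of defense; in a Zero Trust posture it would sit alongside provenance checks and access controls on the data pipeline itself.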

CISA’s Zero Trust Maturity Model (ZTMM) 2.0 foreshadowed this emerging relationship between Zero Trust and AI with a significant focus on the modernization of the Identity and Devices domains to improve an organization’s cyber-risk posture. Some specific examples include:

  • Identity Verification – Establishing and maintaining trusted identity is a critical component of any Zero Trust strategy, yet this is becoming harder and harder with AI-generated fakes. This is where AI-enabled biometric identity verification can help level the playing field to identify deepfakes in real time.
  • Adaptive Authentication – AI-enabled authentication can dynamically adjust privileges to respond to real-time risk factors like device reputation, geolocation, and behavioral biometrics. This AI-enabled approach aligns directly with Zero Trust’s “least privilege” construct.
  • Behavioral Analytics and Pattern Recognition – AI models that continuously learn and adapt to emerging patterns are ideal to analyze large volumes of distributed data to flag anomalies and potential threats. With this AI-enabled approach, Zero Trust’s “Never Trust, Always Verify” is more easily attainable.

So, there you have it: Zero Trust and AI are inextricably linked for organizational success and safety. With strict access controls, comprehensive visibility, and continual monitoring, Zero Trust lets organizations take advantage of the power of AI, while also helping to neutralize AI risks.

Learn more about Entrust’s identity-centric Zero Trust solutions.