AI and Cybersecurity in a Critical Election Year

July 26, 2024

Written by: Jenn Markey

With almost half of the global electorate headed to the polls, 2024 is a critical election year – not just in terms of shifting geopolitics, but also for protecting election integrity and the institution of democracy itself. Like water treatment plants and the power grid, elections are critical infrastructure – essential to a nation’s security, stability, and prosperity – and they need to be protected as such. Indeed, elections have been formally classified as critical infrastructure in the U.S. since 2017, with CISA charged with helping to secure them.

The AI Disruption in Elections

A new concern this election cycle is the potential disruptive impact of artificial intelligence (AI). Particularly worrying is the AI-powered creation and dissemination of hyper-realistic false information – audio, video, and text – at scale, notably deepfakes. One such example occurred in the 2024 New Hampshire Democratic primary, where voters received a robocall from a fake Joe Biden discouraging them from voting, a ruse that was subsequently traced back to an opponent’s supporter. Such fakery could be particularly damaging in the upcoming U.S. federal election, especially in several high-profile, narrow-margin swing states.

AI-Powered Threats

Other possible concerns include AI-powered phishing attacks on voters and election officials, synthetic voter identities, and public reliance on AI chatbots for electoral information. On that last note, a recent study revealed an average error rate of 27% for U.S. election-related queries across five tested chatbots, with Google Gemini 1.0 Pro answering incorrectly 43% of the time. When asked who won the 2020 U.S. presidential election, the Microsoft and Google chatbots simply won’t answer. CISA’s take is that generative AI is not likely to introduce new risks to elections, but rather amplify existing ones.

Combating AI Threats

Combating AI threats requires an identity-centric approach: decentralized systems that encrypt and digitally sign personal data, giving individuals full control over their identity while preventing the exposure of sensitive information. This should be coupled with robust systems for verifying the authenticity of digital media and combating deepfakes. Digitally signing videos, images, and documents via PKI allows creators and audiences to verify content at scale across the web.
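
As a rough illustration of the mechanics behind such signing, the minimal sketch below signs a piece of content with an RSA private key and verifies it with the matching public key, using Python’s cryptography package. The content bytes and key handling are illustrative assumptions; in a real content-provenance workflow the public key would be distributed in a PKI certificate and the private key protected in secure hardware.

```python
# Minimal sketch (illustrative assumptions): sign content bytes and verify the
# signature with the "cryptography" package. Not a full content-provenance system.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair for the content creator. In practice the public key would ship in a
# PKI certificate and the private key would stay in secure hardware.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

# Stand-in for the raw bytes of a video, image, or document.
content = b"example campaign video bytes"

# The creator signs the content.
signature = private_key.sign(
    content,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Anyone holding the public key can confirm the content is unaltered;
# verify() raises InvalidSignature if the bytes were tampered with.
public_key.verify(
    signature,
    content,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```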

The Positive Potential of AI

From a “glass-half-full” perspective, there is also the potential to harness AI for improved election security, integrity, and efficiency. AI can simultaneously humanize candidates and extend their reach, including disseminating accurate information to a widespread electorate. And, while it may seem far-fetched, AI Steve ran for the U.K. Parliament, and Vic is campaigning for mayor in Cheyenne, Wyoming.

There is also the potential to leverage AI to improve the efficiency and integrity of election administration, including signature verification of mail-in ballots and validating paper ballots that are unreadable by OCR and would otherwise require manual attention.

Bolstering Election Cybersecurity

Arguably, the largest positive impact AI can and should have is bolstering election cybersecurity. AI can be a very powerful tool in an organization’s security toolkit for anticipating and warding off cyberattacks, but it needs to be part of a larger Zero Trust strategy. That means implementing strong identity and access management controls, mandating phishing-resistant multi-factor authentication (MFA) everywhere, and leveraging PKI certificates to verify and encrypt communications. Additionally, governments have a responsibility to voters, election officials, and candidates to inform and educate them on good cyber hygiene practices, including MFA, strong passwords, encryption, and the protection of voter personally identifiable information (PII).
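
As one simplified example of that last point, the sketch below configures a Python TLS server context that both encrypts traffic and requires clients to present certificates issued by a trusted CA (mutual TLS). The file paths are hypothetical placeholders, not a prescribed election deployment.

```python
# Minimal sketch (hypothetical file paths): a TLS server context that encrypts
# traffic and only accepts clients presenting a certificate from a trusted CA.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2                       # disallow legacy TLS versions
context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # server identity
context.load_verify_locations(cafile="trusted_ca.pem")                 # CA that issues client certs
context.verify_mode = ssl.CERT_REQUIRED                                # reject unauthenticated clients
```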

A Call to Action

A robust and informed identity security posture makes it far more difficult for adversaries to compromise systems, steal data, or conduct influence operations.

Whether it’s elected leadership, technology, or the threat landscape, the only constant is change, and Zero Trust is the way to secure that change. Download the 2024 State of Zero Trust and Encryption Study to learn more.

Jenn Markey
Advisor, Entrust Cybersecurity Institute
Jenn Markey is a content advisor and thought leader with the Entrust Cybersecurity Institute. Her previous roles with Entrust include VP of Product Marketing for the Payments and Identity portfolio and Director of Product Marketing for the company’s Identity and Access Management (IAM) business. Jenn brings 25+ years of high-tech product management, business development, and marketing experience to the Entrust Cybersecurity Institute, with significant expertise in content development and curation.