The new year is always a time for reflection, and after a year shaped by AI, biometrics, and nation-state attacks, there’s a lot to think about. So where are we headed from here? A utopian episode of Star Trek, or a more dystopian Terminator 10? The future, as they say, is up to us, and that means reshaping trust and security for the realities of our brave new world.

The intensifying geopolitical landscape, characterized by state-sponsored attacks, has made cyber the new battleground. Employing Zero Trust principles is a best practice, but not a cybersecurity guarantee in an era of AI-enhanced phishing, zero-day brokers, and malware-as-a-service. Red teaming, a term borrowed from U.S. Cold War simulations, is back in vogue. A red team plays the adversary, mounting a simulated cyberattack under the direction of the target organization. One such example comes from the European Central Bank, which has started to conduct vulnerability assessments and incident response tests on banks to assess their cyber resilience. Whether simulated or real, all organizations should seek to turn cyber breaches into a blueprint for future security, with actionable strategies to strengthen their cyber posture.

One particular area of cyber exposure is critical infrastructure, from the power grid to water treatment plants to public service providers and beyond. We live in an “everything is connected to everything” world, making critical infrastructure a vulnerable and attractive target for bad actors. In 2024, it’s long past time to get serious about IoT and Industrial IoT (IIoT) security. As well, AI-enabled biometric identity verification of employees, partners, and customers is increasingly essential to flushing out deepfakes and keeping organizations and people safe.

There are also encouraging signs that regulators around the globe are stepping up to the challenges ahead. As with data privacy, consumer protection, and digital identity legislation, Europe is in the vanguard once again with the EU AI Act. The White House has also issued the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence Executive Order, and the bipartisan Schatz-Kennedy AI Labeling Act has been introduced. However, because it’s an election year in the U.S., it’s likely that big tech will be left largely to self-regulate over the next 12 months. That said, the tech sector is stepping up, with seven technology behemoths including Amazon, Google, and Microsoft agreeing with the Biden administration to adopt AI safeguards. As well, ChatGPT maker and close Microsoft collaborator OpenAI has released its framework to mitigate catastrophic risks, such as using generative AI to build biological weapons, spread malware, or carry out social engineering attacks. Meanwhile, the National Institute of Standards and Technology (NIST) has been busy building out the Artificial Intelligence Risk Management Framework (AI RMF) in close collaboration with the private and public sectors, aiming to continue leveraging the power of AI while mitigating its risks.

So, at the outset of 2024, I remain cautiously optimistic that the forces of good will prevail, and that ours will be a utopian, technology-fueled future.