Building Your Digital DNA: PKI and Cryptographic Disruptors

October 21, 2016

Written by: Entrust

Almost fifteen years ago, I was teaching classes in application security at a large financial institution. When we got to the part about cryptography, we talked about the importance of not writing your own cryptography, of using standard, tested implementations, and of relying on proven encryption and hashing algorithms. Among the examples we listed were MD5, SHA-1 and RC4.

Fast forward to today: None of those algorithms are considered secure anymore. Researchers began demonstrating practical collision attacks against MD5 in the mid-2000s. The IETF has prohibited the use of RC4 in TLS (RFC 7465). More recently, you may have dealt with the impact of SHA-1 being removed as an acceptable algorithm in many standards, including PCI standards and the CA/Browser Forum’s Baseline Requirements.

I don’t mention this to criticize those algorithms or the recommendations we made in the application security classes. At the time, those were some of the most tried-and-true algorithms available, and using them was considered good practice. On the public key side of the house, while I don’t remember what key sizes we recommended at the time, I am absolutely sure they were much smaller than the 2048-bit or 4096-bit RSA keys that are common today.

Cryptography evolves, often rapidly. New attacks and advances in computing power mean that the algorithms and key sizes that were standard a decade ago don’t cut it anymore. Chances are, the algorithms and key sizes that are standard today won’t cut it in 2025 or 2030. If you are involved in securing a product or network, you need to assume that your cryptographic choices are not static; they will have to change. And we all need to be prepared for that.

I was thinking about this a few weeks ago while reading Gartner’s report on PKI, which called out challenges like managing older hashing algorithms (specifically SHA-1), increasing key sizes and the advancement of quantum computing. None of these issues are specific to enterprise PKI. Across the industry, organizations are struggling to migrate older applications from SHA-1 to SHA-2, not just to support their PKI infrastructure but to remove an insecure hashing algorithm from their environments. Increasing key sizes are a challenge for certificates deployed in constrained environments, and for any other resource-limited system that relies on public and private keys. Finally, advances in quantum computing have many in the industry developing new public key algorithms that will not be susceptible to quantum cryptanalysis. As those algorithms become standard, many developers will need to learn how to incorporate them and remove the older ones.
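To make the first of those challenges concrete, here is a minimal Python sketch, with names and structure of my own invention rather than anything from the report, of what hash agility can look like inside an application: the digest algorithm is configuration rather than code, so a SHA-1 to SHA-2 migration touches one definition instead of every call site.

    import hashlib

    # Hypothetical configuration value: the algorithm is data, not code.
    DIGEST_ALGORITHM = "sha256"  # was "sha1" before the migration

    def fingerprint(data: bytes, algorithm: str = DIGEST_ALGORITHM) -> str:
        """Return a hex digest of data using the configured algorithm."""
        return hashlib.new(algorithm, data).hexdigest()

    print(fingerprint(b"example"))          # current algorithm (SHA-256)
    print(fingerprint(b"example", "sha1"))  # legacy path, kept only to verify old digests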

A few years ago, as a particular cryptographic standard was being finalized, a friend asked me if they really needed to worry about implementing it. Couldn’t they just stick with what they had? Unfortunately, history shows that algorithms don’t last. The algorithms my friend was using are solid today. Will that still be the case in ten or fifteen years? I can’t say for certain, but looking at the history of cryptographic algorithms, I’d bet against it. If we don’t implement the next generation of algorithms today, we risk being caught flat-footed when cryptanalysts discover vulnerabilities in the existing ones. So, yes, they had to implement the new standards. Maybe not tomorrow, but sooner rather than later. It’s just good practice.

For a cautionary tale, look at the RADIUS standard, still used by many vendors and enterprises for network access. When RADIUS was designed in the 1990s, MD5 was baked directly into its password-protection scheme. That was reasonable at the time, but as I’ve already mentioned, researchers have since found several vulnerabilities in MD5. Because the algorithm is fixed in the standard, organizations now find themselves building additional security layers on top of the RADIUS protocol to protect passwords. In 2015, the IETF addressed this class of problem in RFC 7696, which directs protocol designers to build in algorithm agility: the ability to change algorithms over time without changing the base protocol specification.
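To see how deeply MD5 is embedded, here is a minimal Python sketch of the User-Password hiding scheme from RFC 2865, section 5.2 (the function name is mine, and real implementations handle attribute framing and length limits that are omitted here). Notice that MD5 appears in the wire format itself; there is no field for negotiating a different hash.

    import hashlib

    def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
        """Obfuscate a RADIUS User-Password attribute (RFC 2865, section 5.2).

        Each 16-octet block of the NUL-padded password is XORed with
        MD5(shared secret + previous ciphertext block), seeded with the
        16-octet Request Authenticator. MD5 is part of the wire format;
        nothing in the protocol lets the two ends pick a different hash.
        """
        padded = password + b"\x00" * (-len(password) % 16)
        hidden, prev = b"", authenticator
        for i in range(0, len(padded), 16):
            key = hashlib.md5(secret + prev).digest()
            block = bytes(p ^ k for p, k in zip(padded[i:i + 16], key))
            hidden += block
            prev = block
        return hidden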

The best advice I can give someone is to design for change. Assume the algorithms and key sizes you use today will need to be swapped out in the future, and design that flexibility in. Cryptographic changes are inevitable; if we prepare for them, they will be less disruptive.
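As a hypothetical illustration of what designing for change can look like, the sketch below tags every digest with the algorithm that produced it and keeps the set of acceptable algorithms in one place, so retiring an algorithm becomes a policy change rather than a rewrite.

    import hashlib

    CURRENT_ALGORITHM = "sha256"
    ACCEPTED_ALGORITHMS = {"sha256", "sha512"}  # "sha1" was retired here, and only here

    def make_digest(data: bytes) -> str:
        """Produce a self-describing digest such as 'sha256:9f86d0...'."""
        return f"{CURRENT_ALGORITHM}:{hashlib.new(CURRENT_ALGORITHM, data).hexdigest()}"

    def check_digest(data: bytes, tagged: str) -> bool:
        """Verify data against a tagged digest, failing closed on retired algorithms."""
        algorithm, _, expected = tagged.partition(":")
        if algorithm not in ACCEPTED_ALGORITHMS:
            return False  # an old "sha1:..." value is rejected, not silently trusted
        return hashlib.new(algorithm, data).hexdigest() == expected

The same idea extends to signatures, certificates and key sizes: record what was used, centralize what is allowed, and assume the answer will change.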