Back in the 1990s, asymmetric cryptography and Public Key Infrastructure (PKI) using X.509 certificates was a pretty hot topic in security. This technology held great promise for distributed systems, e-commerce, and digital cash. And, as is common with new, exciting technologies, it became the hyped-up answer to everything wherever cryptographic key management was required. Academic and near-religious arguments ensued about the advantages of asymmetric key systems (e.g. X.509 certificate based) versus symmetric key systems (e.g. Kerberos). Pundits on both sides argued the need (or not) for online trust services for key distribution and revocation. Arguments ensued about the computational cost of asymmetric crypto operations versus the efficiency of symmetric crypto at similar key strengths.

Large enterprise deployments of Kerberos became common when Microsoft adopted Kerberos for securing Active Directory based deployments. Meanwhile, some security-conscious organizations (usually spearheaded by government and large financial organizations) started deploying PKI for smartcard authentication, secure email, digital signing, and timestamping. And, on a larger scale, we saw the adoption of PKI for SSL/TLS transport layer (TCP) communications with web servers and for network layer (IP) security with VPNs. We also saw hybrid asymmetric/symmetric innovations that leveraged the advantages of both PKI and Kerberos – again, much of this deployment happened at scale due to Microsoft's adoption and deployment with Active Directory (e.g. smartcard-based authentication for Kerberos systems is part of the Microsoft Windows infrastructure). As with many new and exciting technologies, once there is more general understanding and common deployment of use cases, the technology and the associated hype fade into the background. This seemed to be the case with PKI.
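The hybrid asymmetric/symmetric pattern mentioned above is the same design SSL/TLS uses: a single expensive asymmetric exchange establishes a session key, then cheap symmetric crypto protects all the bulk traffic. Here is a toy sketch of that pattern in Python – the Diffie-Hellman parameters and the hash-based stream cipher are deliberately simplified illustrations of my own, not anything you should ever use for real security:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters. Real deployments use standardized groups
# (e.g. RFC 3526) or elliptic curves; this 64-bit prime is illustration only.
P = 2**64 - 59  # the largest prime below 2**64 -- far too small for real use
G = 5

def dh_keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# One asymmetric exchange per session...
alice_priv, alice_pub = dh_keypair()
bob_priv, bob_pub = dh_keypair()
shared_alice = pow(bob_pub, alice_priv, P)
shared_bob = pow(alice_pub, bob_priv, P)
assert shared_alice == shared_bob  # both sides derive the same secret

# ...then inexpensive symmetric crypto for all of the bulk data.
session_key = hashlib.sha256(str(shared_alice).encode()).digest()
ciphertext = keystream_xor(session_key, b"bulk application data")
assert keystream_xor(session_key, ciphertext) == b"bulk application data"
```

The design point is that the costly public-key math happens once, while the per-byte work is all symmetric – which is why the hybrid approach won out over pure-asymmetric designs.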
Before I make the case for us all to get excited about PKI once again, I want to briefly discuss why PKI was so exciting in the 1990s and what some of the challenges were in seeing wider-scale adoption. PKI (I'm using this term as shorthand for "asymmetric key systems") has always had the advantage of supporting highly distributed, loosely coupled trust establishment and management. This comes from the PKI characteristics of hierarchical scale, long-lived keys, and no need for an online service at the time of trust establishment (note that this last statement is loaded with nuance once you consider the need for key revocation). As is so often the case with newer technologies, the excitement surrounded the core of the new technology – in this case, the asymmetric characteristics of a public/private key pair in which one key is the inverse of the other. Asymmetric crypto using the RSA or Diffie-Hellman algorithms was cool, and these algorithms held much promise once some key issues could be addressed.

So, what were those key issues? A huge one was the high computational cost, for which the industry had no low-cost solutions. Luckily, problems of computational cost have been addressed by Moore's Law, and the hardware cost of performing these expensive operations has decreased dramatically, to the point of being a non-issue. Another issue was that it is difficult to retrofit new infrastructure technologies into existing solutions – it is very hard to justify the cost of re-architecting an application or system to use PKI. This is a generic reason we saw the desire for overlay security solutions (e.g. network-based perimeter security) to address legacy (meaning anything already developed or deployed) applications and systems.
Applications were typically coded, encased in some infrastructure for deployment, and then the deployment was secured as an afterthought – meaning that security was not built into the application as a core characteristic. The other main barrier to PKI adoption and wide-scale embedding of PKI has been the lack of expertise to deploy and effectively manage a PKI. Let's face it, PKI can be complicated. That said, I would suggest that the ability to simplify deployment and management is one of the most important aspects of the maturation of any technology at scale. If you think about why Microsoft Windows environments constitute the broadest adoption and deployment of Kerberos and smartcard/Kerberos combinations, it is easy to see how integration into a common management infrastructure (Active Directory) made it possible.
So, what has changed, and why are things going to be different? First, I already made the point about Moore's Law and how it has mitigated the issue of computational cost – that one was just a matter of time, and we're there. What is so exciting is that the major trends everyone has been discussing with such fervor over the past years are not only creating opportunities for new technologies and businesses; they are also breathing new life into technologies that have just been waiting for their time to come – and PKI is one such technology.

Let's recap these major trends – how about if I just sum them up as mobility and cloud? In that context, what has happened in a very macro sense is that the interactions between resources and the consumers of resources have changed. I want to repeat that point because it is very important: the interactions between resources and the consumers of resources have changed. Look at the end user or end user device, for example – just one kind of consumer of resources. At one point in the past, the end user interaction was limited to a single person at a terminal, connected over a wire to a server running an application. This has all changed with mobile devices, with the empowerment of a much broader set of people, and with wireless and roaming technology. Look at the nature of servers and applications – these are no longer delivered only from a server in a datacenter. Resources may be delivered by servers running in a datacenter, by a hosted environment, or by a SaaS service. Resources have also become consumers of resources – applications call other applications. The interactions have changed, and they have changed such that they are much more distributed, dynamic, and ephemeral – they are not limited by location, they may occur at any time and at any pace, and they come and go. What else is happening? The nature of application development and delivery is changing.
As I stated earlier, applications used to be developed, wrapped in infrastructure, and deployed within some controlled perimeter. This is no longer the case. Some folks refer to this as the application economy – everything is an app; got an app for that? Fundamentally, what is happening is that the separation of application development, infrastructure, and deployment is dissolving – the process of developing and deploying applications is becoming more integrated and streamlined. For shorthand, let me loosely refer to this as DevOps.
OK, so things have changed: 1) computational cost has dramatically decreased; 2) interactions between resources and consumers of resources are much more distributed, dynamic, and ephemeral; and 3) application development and delivery are becoming much more integrated. At this point, you may be asking yourself, "What does this have to do with PKI?" Well, let's review the advantages of PKI and the barriers that kept those advantages from being fully realized in the past. PKI is very well suited to distributed, dynamic deployment models, but adoption has been hampered by computational cost, impediments to application/solution integration, and the complexity of effectively managing a PKI (especially since issues involving such critical security infrastructure can have serious ramifications). So now, when we look at our current environment and the brave new world brought on by mobility and cloud (including the Internet of Things), we see an explosion of distributed devices and resources in various forms; low-cost, powerful compute; and an era of application creation and innovation that requires foundational security to be built in. This is exactly the mix that makes the basic tenets of PKI not only attractive but necessary. The one thing that will limit the effectiveness and security of this brave new world is the ability to encapsulate the complexity of deploying and managing that critical foundational security. Now, obviously, I work for Entrust Datacard, so I may have a bias, but let me qualify that bias by saying that the reason I recently joined the Entrust Datacard team is that this is the team that spearheaded the PKI revolution in its infancy and, more importantly, the team that is able to address the most difficult part of the security challenges I've just been describing.
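The property that makes PKI such a good fit for these distributed, ephemeral interactions is worth seeing concretely: a verifier needs only the signer's public key, with no online service in the loop at verification time. A toy textbook-RSA sketch (my own illustration, built from two well-known Mersenne primes; real deployments use randomly generated 2048+ bit keys and proper signature padding such as PSS):

```python
import hashlib

# Textbook RSA from two known Mersenne primes -- illustration of the
# public/private inverse property only, NOT a secure or realistic key.
p = 2**127 - 1                      # Mersenne prime M127
q = 2**521 - 1                      # Mersenne prime M521
n = p * q                           # public modulus (~648 bits)
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: modular inverse of e

def sign(message: bytes) -> int:
    """Holder of the private key signs a message digest."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone with the public key verifies -- offline, no trust service."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return pow(signature, e, n) == digest

sig = sign(b"device enrollment request")
assert verify(b"device enrollment request", sig)
assert not verify(b"tampered request", sig)
```

An IoT device or roaming client can carry a certificate issued once and have it verified anywhere, at any time – exactly the loosely coupled trust model the new interaction patterns demand (with the revocation caveat noted earlier still applying).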
The industry is at an inflection point where technology and market conditions have left just one gap: the knowledge and expertise to provide sage guidance and to effectively deploy, manage, and hide the complexity of the foundational security this brave new world requires. This gap cannot be filled by just writing code or reading a book – it takes years of practice and experience in the real world. So, a parting thought: when you think about how you will take advantage of the new and great opportunities in front of us, how are you going to address the most critical, foundational security gaps?