Code signing has been a popular topic over the last fortnight.

  • Yahoo was quick to patch an embarrassing key management error, discovered by security blogger Nik Cubrilovic, with the signing key used in its new Axis browser extension for Chrome.
  • Microsoft has published [a security advisory][2] revoking trust in a number of digital certificates that may have been abused to sign parts of the recently discovered “Flame” malware.

Code signing is an essential tool in establishing the trust we now demand from the Internet of Things. It’s worth taking a moment to consider the potential impact of losing control of a code signing key or certificate.

Code signing has become popular in concert with the continuing rise in the number of Internet-connected devices. Devices now routinely updated over the Internet include smartphones, tablets, TVs, PCs, games consoles, routers and industrial control equipment. Upgrades can be anything from a new operating system release to a new application or application plugin. The rise of the “app store” has further increased the range and number of applications downloaded over the Internet, with end users giving little regard to the author’s credentials.

Applying a digital signature to an upgrade file (often called code signing) is the most cost-effective way to block malicious upgrades, but this approach relies on the publisher protecting their signing key from loss, misuse and theft. In the case of open platforms where third parties sell application software, the platform vendor must also act as a certificate authority, issuing digital certificates and associated security policies to trusted software vendors.
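To make the mechanics concrete, the sketch below signs and verifies an “upgrade file” using textbook RSA with deliberately tiny, hardcoded parameters. It is purely illustrative: a real publisher would use 2048-bit or larger keys with proper padding (such as PSS), generated and held inside an HSM rather than appearing in source code.

```python
import hashlib

# Textbook RSA with tiny, hardcoded parameters -- for illustration only.
p, q = 61, 53
n = p * q    # public modulus (3233)
e = 17       # public exponent, shipped with every device
d = 413      # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

def sign(upgrade: bytes) -> int:
    """Publisher side: hash the upgrade file, then apply the private key."""
    digest = int.from_bytes(hashlib.sha256(upgrade).digest(), "big") % n
    return pow(digest, d, n)

def verify(upgrade: bytes, signature: int) -> bool:
    """Device side: recompute the hash and check it against the signature."""
    digest = int.from_bytes(hashlib.sha256(upgrade).digest(), "big") % n
    return pow(signature, e, n) == digest

firmware = b"firmware v2.0"
sig = sign(firmware)
assert verify(firmware, sig)  # a genuine, unmodified upgrade is accepted
# Any change to the file changes its hash, so verification fails for a
# tampered upgrade -- provided the private exponent d has stayed secret.
```

The key point is the asymmetry: anyone holding the public values (n, e) can verify an upgrade, but only the holder of the private exponent d can produce a valid signature, which is why protecting that key is everything.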

Merely requiring code to be signed does not ensure security. If a code signing key is lost, it may become impossible to publish any further upgrades for existing smart devices. If a key is stolen or uses a weak algorithm, an attacker may be able to sign a malicious upgrade that either steals sensitive data or renders potentially millions of devices permanently inoperable, on a scale we’ve yet to experience. Keys should also be used for just a single purpose, to help prevent misuse and simplify revocation.

However, effective management of code signing keys can be difficult, since there’s an implicit tension between the requirement to protect a key from loss and the requirement to keep it confidential. Fortunately, the type of key used to sign apps or operating systems can easily be protected in a dedicated key management device called a Hardware Security Module (HSM). HSMs work by automating otherwise difficult key management tasks to ensure code signing processes remain effective.

A well-designed HSM will help to secure the integrity of a code signing operation in at least three ways:

  • Simplify key backup and archival to ensure keys can never be lost
  • Provide independently certified protection against accidental or malicious key theft
  • Enforce dual control over code signing procedures to protect against unauthorised use of the code signing keys
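Dual control is commonly implemented as a k-of-n quorum, so that no single custodian can authorise use of the signing key alone. The sketch below illustrates the underlying idea with Shamir-style secret sharing in plain Python; it is a conceptual illustration of quorum-based authorisation, not how any particular HSM product implements it (a well-designed HSM never reconstructs the key in application memory at all).

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 0x1234567890ABCDEF        # stand-in for a signing key's secret value
shares = split(key, k=2, n=3)   # three custodians, any two can authorise
assert combine(shares[:2]) == key
assert combine(shares[1:]) == key
```

Fewer than k shares reveal nothing about the secret, which is what makes a quorum policy meaningful: a single rogue or careless custodian cannot release the key.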

In the case of the Axis key loss, this human error was possible because the signing key was stored in a text file rather than an HSM. Where an HSM is used, keys are simply never exposed in this way, so human error is far less likely to have any serious impact.

Platform vendors are increasingly seeking to differentiate their products by promoting the security technology integrated into them. It seems likely we’ll now see platform vendors publishing stronger mandatory security policies for application vendors as they learn from the recent failures of other large-scale PKIs (think DigiNotar) and work to stay one step ahead of increasingly sophisticated attacks.

[2]: https://technet.microsoft.com/en-us/security/advisory/2718704