Revocation doesn’t work: an alternate take

I’ve never been a big fan of the Certificate Authority system used by web browsers for HTTPS, but we can’t throw it away just yet.  The alternative I’d like to see is a certificate or public-key fingerprint stored in DNS alongside the A/AAAA/SRV record for the service being accessed.  This record would be secured with DNSSEC and would have to be fresh.  With this, the certificate shares the lifetime of the associated DNS record, and revocation happens by rotating public keys.  This eliminates the need to maintain separate hierarchies for DNS and SSL certificates and puts revocation in the hands of whoever controls the DNS.
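As a rough sketch of what this could look like, here is a hypothetical zone-file fragment in the spirit of DANE’s TLSA record format: the fingerprint record sits right beside the address record for the service (all names and the hash value are placeholders):

```
; hypothetical zone entries: fingerprint published next to the address record,
; both signed with DNSSEC and subject to the same TTL-driven freshness
www.example.com.           IN A    192.0.2.10
_443._tcp.www.example.com. IN TLSA 3 1 1 (
        8cb0fc6c527506a053f4f14c8464bebbd6dede2738d11468dd953d7d6a3021f1 )
```

Rotating the key then just means publishing a record with the new fingerprint and letting the old one age out with its TTL.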

Unfortunately, this is not a viable option yet, so we still need to maintain the existing system.  Adam Langley wrote an interesting article on the certificate revocation support in existing browsers.  I agree with him that it should be improved upon, but I believe there is a better solution.  He proposed reducing the lifetime of certificates down to a matter of days as an alternative to supplying up-to-date revocation information.  The problem with this is that the CA must continually re-sign updated certificates for all of its users, which requires more computing power, and it also needs methods to automate that re-signing and to distribute the updated certificates to users.  If something goes wrong, a user may be stuck with an expired certificate, which is worse than the current situation.

A better way to distribute that load would be to rely on more intermediate Certificate Authorities.  Unfortunately, current software has no way to further restrict the authority of an intermediate CA, so access to its private key must be controlled as tightly as the issuing CA controls its own.  Now, if there were a new critical extension for CA certificates that, say, only allowed signing certificates for a specific domain name, the issuing CA could delegate the task of signing SSL certificates to the site using them.  That site could then keep the restricted intermediate CA on a secure off-line machine and sign a new SSL certificate daily without requiring regular communication with the CA.
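As a sketch of what such a restricted certificate might contain, X.509 already defines a Name Constraints extension that captures this idea, and it can stand in for the proposed extension.  Here is how one might mint a domain-limited CA certificate with the third-party Python `cryptography` package (the subject name, domain, and lifetime are all made up for illustration):

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Key pair for the hypothetical restricted intermediate CA.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Restricted CA")])
now = datetime.now(timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed here; a real one would be signed by the issuing CA
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=365))
    # It is a CA, but it may only sign end-entity certificates (path_length=0)...
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    # ...and only for example.com and its subdomains.  Marked critical, so
    # software that does not understand the restriction must reject the chain.
    .add_extension(
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("example.com")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(key, hashes.SHA256())
)

nc = cert.extensions.get_extension_for_class(x509.NameConstraints)
```

The critical flag is what gives the restriction teeth, and it is also exactly why deploying this requires updated software everywhere.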

Again, this idea is not very viable either, as all kinds of software would need to be updated to support the new critical extension, and without that support the result would be a failed SSL connection due to an unsupported critical extension.  I believe the best solution is to just improve the distribution and caching of the existing revocation information.  Certificates normally include one or more URLs pointing to either CRLs or OCSP servers, and at least one CRL or OCSP server must be accessible to validate a certificate.

OCSP is a protocol for querying the revocation status of individual certificates.  The OCSP server will assert that a certificate is valid, that it is revoked, or that nothing is known about the certificate in question.  This assertion has a lifetime associated with it and can be stored in a cache until either that lifetime has expired or the browser decides it wants fresh data.
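The caching rule comes down to a few lines: an OCSP assertion carries a validity window (thisUpdate/nextUpdate in the protocol), and a cached copy may be reused only while the current time falls inside it.  A minimal sketch in Python, where the cache is just an illustrative dict keyed by certificate serial number:

```python
from datetime import datetime, timedelta

# Illustrative in-memory cache: serial number -> (status, this_update, next_update)
cache = {}

def cached_status(serial, now):
    """Return a cached OCSP status if its validity window still covers `now`."""
    entry = cache.get(serial)
    if entry is None:
        return None
    status, this_update, next_update = entry
    if this_update <= now < next_update:
        return status
    del cache[serial]  # the assertion's lifetime has expired: force a fresh query
    return None

# Example: a "good" assertion issued with a four-day lifetime.
issued = datetime(2011, 4, 1, 12, 0)
cache[0x1234] = ("good", issued, issued + timedelta(days=4))
```

A browser that "wants fresh data" simply bypasses the cache and queries the OCSP server again, replacing the stored entry.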

CRLs are much simpler: a CRL is nothing more than a file, signed by the CA, containing a list of all revoked certificates.  Like an OCSP assertion, it has a lifetime stored with it and covered by the CA’s signature.  Lifetimes for CRLs range from 30 days or more down to a few days or less.  A CRL only has to be signed once during its lifetime for all certificates issued by a CA, which puts much less strain on the CA than re-signing all of its issued certificates.  Also, since a CRL is a simple file, it can be distributed in many different ways; the URL included in the certificate is merely a convenient place to find it.

Proxy servers can be set up to cache old, but still valid, copies of the CRL, and alternative repositories can be set up to search for missing CRLs.  A mechanism could also be set up to auto-detect and download a copy of the CRL from the end site the browser is connecting to: perhaps an optional TLS extension presenting a cached copy of all relevant CRLs to the browser, or a standard, non-secure HTTP URL where a site could host copies of the CRLs.  A web server can then be set up to retrieve any CRLs relevant to its SSL service.
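To make the “signed once for all certificates” point concrete, here is a sketch using the third-party Python `cryptography` package: one signature covers the whole revocation list, including its lifetime (the issuer name, serial number, and seven-day lifetime are made up):

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The CA's signing key (illustrative; a real CA key lives in an HSM).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.now(timezone.utc)

# One revoked certificate, identified only by its serial number.
revoked = (
    x509.RevokedCertificateBuilder()
    .serial_number(12345)
    .revocation_date(now)
    .build()
)

# The whole list -- entries plus its lifetime -- is covered by a single signature.
crl = (
    x509.CertificateRevocationListBuilder()
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")]))
    .last_update(now)
    .next_update(now + timedelta(days=7))  # the CRL's lifetime
    .add_revoked_certificate(revoked)
    .sign(private_key=key, algorithm=hashes.SHA256())
)
```

Because the signature travels inside the file, anyone (a proxy, a mirror, or the web site itself) can redistribute the CRL without being trusted: a client verifies it against the CA’s key no matter where the bytes came from.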

Say a given CRL is regenerated once a day and has a lifetime of 7 days.  A web server could attempt to retrieve the latest copy once a day, retrying hourly on failure.  Even after six days of continual failure, the web server would still have a valid CRL to offer, which leaves plenty of margin for a website to retrieve the latest CRLs for its HTTPS service.  When a client connects during a DoS attack against the CA, it could retrieve those CRLs directly from the web server.  This method is resilient, since all the necessary information is now retrievable from the same location; backwards-compatible with the existing system, since CAs will still host their CRLs; and secure, since all critical information is signed directly by the CA.
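The fetch schedule above can be sketched as a few lines of Python, using the numbers from the scenario (a 7-day CRL lifetime, daily refresh, hourly retry after a failure):

```python
from datetime import datetime, timedelta

CRL_LIFETIME = timedelta(days=7)

def have_valid_crl(last_successful_fetch, now):
    """The cached CRL can be served until its 7-day lifetime runs out."""
    return now - last_successful_fetch < CRL_LIFETIME

def next_attempt(last_attempt, succeeded):
    """Refresh once a day on success; retry hourly after a failure."""
    return last_attempt + (timedelta(days=1) if succeeded else timedelta(hours=1))

fetched = datetime(2011, 4, 1)
# Even after six full days of failed fetches, the cached CRL is still servable,
# because the CA re-signs a fresh 7-day CRL every day and we only missed six.
```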
