On February 21, Apple announced that all recent versions of its operating systems (in fact, every version since 2012) had a critical vulnerability in their SSL implementation. The vulnerability was due to a coding bug in the SSL certificate verification routine. The following is the problematic piece of code.
```c
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    ...
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...
fail:
    ...
    return err;
}
```
The duplicated “goto fail” should not be in the source code. Because it is not guarded by any condition, the code is equivalent to “if condition then goto fail else goto fail”: in all cases, execution jumps to fail, with err still holding 0 (success) from the last check that ran. It bypasses all subsequent tests of the SSL certificate. Thus, an attacker could easily use a forged certificate that would be accepted as valid. She could mount a man-in-the-middle attack.
This code is part of Apple's public source code. The bug points to inadequate programming practices. Good software engineering normally requires at least two things:
- Unit tests; they should have covered all the branches of the if conditions and would have detected this error.
- Regression tests; if the error was introduced later (SSL support has been present for a long time), they should have spotted it.
These practices do not seem to have been in place. This is a major issue for code that handles security. Security requires the best coding practices, because bugs easily turn into security vulnerabilities.
In the buzz, I have even seen conspiracy theories claiming that the bug was introduced by the NSA. I would expect the NSA to use stealthier techniques than such an obvious back door. Furthermore, if I were a bad guy who wanted to introduce a back door, I would design one that only I could exploit. This one is available to everybody.
Lesson: Secure software requires the best software practices. And a corollary of Law 6: the programmer is the weakest link.