2014-02-23

Apple's SSL bug - better code reviews required

There's a great technical discussion by Adam Langley at ImperialViolet on the inadvertent security hole that Apple introduced in iOS 7 and later versions of OS X. They've released a patch for iOS (which is how people noticed) but are still working on the OS X fix. My sympathies are with Apple despite them being panned for the delay - the fix itself is straightforward, but building, qualifying, canarying and distributing the desktop fix inevitably takes a while, and if you try to speed up this process then you run a high risk of making things much, much worse.

The effect of the bug is that it allows a certain kind of attack (a "man in the middle") which intercepts secure web connections, say from a user on an Apple laptop to their online banking system. An attacker with sufficient access and resources can pretend to the user to be their online banking server, and the user will have no practical way of detecting this. In practice, however, it is very difficult to exploit, and is only really a concern for users who believe they may be targeted by government agencies or by well-funded and persistent private parties; it's unlikely to be widely exploited. Modern iOS and Safari users are not a large fraction of internet traffic, even if you only look at HTTPS traffic.

The bug itself is probably only interesting to code nerds such as your humble correspondent, but how it came about is quite telling about how software development works at Apple.

Here's a cut-down version of the offending function:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
	OSStatus        err;
	[...]
	if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
		goto fail;
	if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
		goto fail;
		goto fail;
	if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
		goto fail;
	[...]

fail:
	SSLFreeBuffer(&signedHashes);
	SSLFreeBuffer(&hashCtx);
	return err;
}

See that third "goto fail;" line in the middle? That's the error. Almost certainly it was the result of a fat-finger in a code editor; it's very unlikely to be a deliberate change. For tedious reasons related to how code blocks work in the C programming language, the effect of the third "goto fail;" is very different from that of the first two. Because the "if" statements have no braces, only the single statement immediately following each one is tied to its condition - the indentation of the third "goto fail;" is purely cosmetic, and it executes unconditionally. So if the program gets past the first two "if" statements successfully (the initial secure connection checks), it jumps straight to the end and never carries out the third check. When it reaches "fail:", the variable "err" actually represents whether the first two checks completed successfully, not (as required) whether all three checks completed successfully - and since they did succeed, the function reports success.

The reason this interests me is that this change made it into an official Apple release without being detected. I claim that if this code change had been reviewed by a human being (as it definitely should have been), anyone paying the slightest attention would have seen the duplicate "goto fail;" line, which makes absolutely no sense. I can fully understand this error not being picked up by automated testing - it's not straightforward to build a test which could cause this particular case to fail - but this is another indicator that Apple are not paying nearly enough attention to developing software reliably. Getting another person to review your code changes is a basic part of the software development process. If it's not being done, or is only being conducted in a cursory fashion, you're going to leave your code riddled with bugs. There is no shortage of bad actors waiting for you to make a mistake like this.

I'm really curious about how this got noticed. My money is on someone browsing the code to make an unrelated change, and being drawn to the duplicate line, but that's only speculation.

I've given Apple heat for their sloppy approach to security in the past and I'm concerned that they're not reacting to the clear signs that they have a problem in this area. If code changes to a key security library are not going through human review, they're going to continue to have problems.

2 comments:

  1. Cut and paste strikes again. I think the developers just started to use a code analysis tool, or actually started to read the output from the tool they were using.

    Java has its faults, but its relative simplicity and interpreted nature meant that IDEs were quick to integrate on the fly code analysis so it was easy to spot fundamental errors like this one.

    Can't help feeling that C/C++ is actually part of the problem. It may be fast and flexible, but that flexibility can make it difficult to automate code analysis.

  2. Yes - you can spot problems like this with compilers' unreachable code analysis, but that's not a standard gcc option (even with -Wall, apparently).
    I'd love for Apple to publish a post mortem on how this snuck through and what measures they're taking to avoid the problem in future, but I fear pigs are likely to fly first.


All comments are subject to retrospective moderation. I will only reject spam, gratuitous abuse, and wilful stupidity.