Showing posts with label security. Show all posts

2022-12-26

The Twitter Whistleblower report - how bad was Twitter, really?

Prompted by a post by everyone's favourite Portugal-based squirrel-torturing blogger, Tim Worstall, I thought I'd dive into the practical implications of all the (frankly, horrendous) technical, security and privacy problems that Twitter was identified as having before Elon Musk rocked up as owner and CEO.

Usual disclaimer: I'm going by the reports. Reality might be different. I cite where I can.

For background: both US and European authorities take a dim view of corporate access to, and usage of, individual user data. Remember the European "ePrivacy Directive"? Also known as the "'f+ck these annoying cookie pop-ups' law"... Governments in Europe and the USA are keenly interested in companies tracking individual users' activities, though my personal opinion is that they're just jealous; they'd like to do it too, but they're not competent. Anyway, a company doing individual tracking at large scale for profit - Twitter, Google, YouTube, Meta, Amazon - attracts their attention, and their laws.

Security

Let's talk about security - and, more importantly, access to secure data. A fundamental principle of security is "least privilege": everyone should have the smallest set of access privileges needed to do their job. You could argue that 5,000+ people at Twitter "need" to be able to change things in production at some point to do their jobs, but they certainly don't "need" always-on, cross-production access - not least because someone experimentally running a command they found in an internal playbook could easily break a large chunk of the service. But don't rely on me; ask their job candidates:

Twitter's practice was a huge red flag for job candidates, who universally expressed disbelief. One Vice President of Information Technology [his current role, not the target role] considered withdrawing his application on the (accurate) rationale that Twitter's lack of basic engineering hygiene in their arrangement presaged major headaches.
Hire that guy.

Certainly, no company is perfect in this area, but those with regulators continually seek to narrow the number of people with access, and the scope of the access those people have. Pre-Musk Twitter clearly did not give a crap about either the count or the scope. One can only imagine why; were they, for instance, relying on a large base of pre-approved employees to intercept and downgrade/block opinions outside the mainstream? How would we tell if this were not the case? Can Twitter show that it was engaged in a systematic reduction of the number and scope of people with production access? If not, who will be held to account?
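Narrowing scope is, in practice, often mundane configuration work. A minimal sketch of the idea in sudoers form - the group, service user, and command here are purely illustrative, nothing from Twitter's actual setup:

```
# Give the on-call group exactly one privileged command, run as one
# service user, instead of blanket root across production.
# (All names illustrative.)
%oncall ALL = (svc-frontend) NOPASSWD: /usr/local/bin/restart-frontend
```

The point isn't the tool; it's that "can touch production" becomes an enumerable, reviewable list rather than a default.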

Auditing

Control is one thing - but if a human performs an action in the production environment (a change or a query), that action should at least be logged, so a future audit can see what happened. This is not a high bar, but it was apparently too high for pre-2022 Twitter:

There was no logging of who went into the production environment or what they did.
FFS
To make clear the implications: in general, there was no way of finding out who queried (for their own purposes) or changed (deleted posts, down-rated users, etc) the production environment at any particular time. "Why did [event] happen?" "Beats the hell out of me, someone probably changed something." "Who? When?" "No idea."

This is particularly interesting because Twitter's Chief Information Security Officer - who resigned post-Musk - was also their former head of privacy engineering and, before that, apparently global lead of privacy technology at Google. One can only imagine what that implies.
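Even a crude first cut at fixing that logging gap is not hard. A minimal sketch in shell - the wrapper and log path are purely illustrative, and a real system would ship logs somewhere operators can't edit them:

```shell
#!/bin/sh
# Sketch: route production actions through a wrapper that records who
# ran what, and when, before executing it. Log path is illustrative.
AUDIT_LOG="${AUDIT_LOG:-/tmp/prod-audit.log}"

audited() {
    # Append a timestamped record of the command, then run it.
    printf '%s %s ran: %s\n' \
        "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$(whoami)" "$*" >> "$AUDIT_LOG"
    "$@"
}

audited echo "deleting post 12345"
```

With something like this in place, "Who? When?" at least has an answer: grep the log.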

Control

There was also a wide range of engineering issues. Data integrity (not losing user-entered data) was obviously critical, but Twitter had been aware for a while that it teetered on the edge of a catastrophic production data loss:

even a temporary but overlapping outage of a small number of datacenters would likely [my italics] result in the service going offline for weeks, months, or permanently.
This is not quite as bad as it first seems. After a year or so in operation, companies have a fairly good idea of what happens in a datacenter outage - because outages are more frequent than you'd imagine. Say Henry the intern accidentally leans against the Big Red Button on the datacenter floor that cuts power to everything. Or you run a generator test, only to discover that a family of endangered hawks has made its nest in the generator housing for Floor 2... So you get used to (relatively) small-scale interruptions.

If you want to run a global service, though, you need to be able to tolerate single site outages as routine, and multiple site outages (which turn out to be inevitable) have to be managed within the general bounds of your service's promised availability - and latency, and data availability. Even if all your physical locations are very separate, there will inevitably be common cause failures - not least, when you're pushing binary or config changes to them. So, don't wait for these events to sneak up on you - rather, anticipate them.

This means that you have to plan for, and practice, these events. If you're not doing so, then a) it will be obvious to anyone asking questions in this area, and b) when things inevitably do run off the rails, there will be bits of burning infrastructure scattered around the highly-paid morons who are busy writing ass-covering memos: "How could we have foreseen this particular event? Clearly it wasn't our fault, but pay us 20% extra and we might catch or mitigate the next one."

Go looking for those people. Fire them, and throw them into a den of hungry pigs.

Leaving the doors open

By far the most horrific aspect, however, was the generally relaxed attitude towards government agencies - and heaven only knows what other NGOs, cabals, and individuals - having under-the-table access to Twitter's data. The mere tolerance of user-installed spyware on privileged devices would be enough to have any sane security engineer tearing out their hair; but actually letting in individuals known to be employed by foreign - and even domestic - governments for the purposes of obtaining intelligence, and potentially affecting the flow of information to their and other countries... one is lost for words.

At some stage, Twitter had to either grow up, or close down. Under Dorsey's crew, the latter was inevitable - and likely not far away. It's still too early to tell if Musk can get them to option 1, but there's still hope.

2019-07-26

Scentrics is still worth half a billion quid, and other fiction

Suppose that you were a UK company with a real valuation of £500M. Would you - or indeed your shareholders - tolerate teetering on the edge of being struck off the UK companies register?

If your company name is Scentrics, it appears that you would:

Date: 04/06/2019
Ref: DEF6/06539484

Companies Act 2006 (Section 1000(3))

The Registrar of Companies gives notice that, unless cause is shown to the contrary, at the expiration of 2 months from the above date the name of
SCENTRICS INFORMATION SECURITY TECHNOLOGIES LIMITED
will be struck off the register and the company will be dissolved

Apparently Scentrics, a company with a nominal £500M valuation, finds it too expensive to employ a £20K/year admin to ensure that its basic legal obligations to Companies House are met.

But they are totally still worth half a billion quid, if you're a prospective investor. Swear it, cross my heart.

Scentrics did actually fix the problem - at least, for now:

Date: 06/07/2019

Cause has been shown why the above company should not be struck off the register and accordingly the Registrar is taking no further action under section 1000 of the Companies Act 2006 pursuant to the Notice dated 03/07/2019
Presumably, the previous notice scared the cr*p out of the Scentrics directors and got them to scramble to address the cause of the proposed strike-off. I'd give a good few quid to know that cause, by the way.

The Scentrics accounts for year end June 2018 still assert that Scentrics is worth a bit short of half a billion quid, but the directors certainly aren't acting like this is actually true; per the doctrine of revealed preference, one is left with the conclusion that those owning the shares believe that it's worth, on the balance of probabilities, only a small multiple of the employment costs of a part-time admin assistant with UK company law knowledge.

Sure, it's totally worth £500M. Practically all of its assets are intangible, it has £90M of liabilities, and no-one has taken a good hard look at its accounts. I have £100 that says it will not have more than £10K of tangible assets in 5 years' time. Anyone like to take the other side of that bet?

2018-09-06

Scentrics worth half a billion quid - and other fiction

Regular readers (both of you) will recall my previous scepticism regarding IT "security" company Scentrics. TL;DR - they're pushing the idea that a key part of "secure" email is sending a copy of every email to a central server, encrypted with a key that only gives access to a trusted party - your local government, for instance. Singapore seemed very interested in their proposals, for reasons one can imagine.

Out of idle curiosity, I thought I'd check the Scentrics accounts for 2016-2017. Well, gosh.

                                         30 June 2017    30 June 2016
                                                    £               £
Fixed assets
  Intangible assets                       504,014,092          20,455
  Property, plant and equipment                 6,463           8,618
  Investments                                      10               -
                                          504,020,565          29,073
Current assets
  Debtors                                   1,051,556       1,047,027
  Cash at bank                                893,815       2,793,822
                                            1,945,371       3,840,849
Creditors within 1 year                     (893,718)       (893,232)
Net current assets                          1,051,653       2,947,617
Total assets less current liabilities     505,072,218       2,976,690
Provision for liabilities                (99,546,235)
Net assets                                405,525,983       2,976,690
Capital and reserves
  Called up share capital                         130             130
  Share premium                             5,778,596       5,778,596
  Retained earnings                       399,747,257     (2,802,036)
                                          405,525,983       2,976,690

How would I read this? They spent £1.9M of their cash on various things during the year; about half of that on medium-to-long term debt servicing, and the rest presumably on overheads (salary, office, patent office fees, other professional service fees). This is clearly not sustainable, and indeed last year they had a net worth (retained earnings) of minus 2.8 million pounds. How could this be fixed?

Well, they've just gained £504 million in intangible assets. The associated notes indicate a "revaluation" of their intangibles happened, which changed from £22K to £560M. There was a 10% amortisation charge ("spreading out") over the year, taking them down to a measly £504M. That's quite a change, what was involved?

Patents and licences were valued on an open market basis on 20 August 2018 by the Directors
There's also the useful information:
Patents and licences are being amortised evenly over their estimated useful life of ten years.
But there's no obvious licence revenue in the company accounts that I can see, and there are still only 4 employees (the directors), so they're not doing anything substantial with the resources; I'd bet this £560M change is a valuation of the worth of their patents. Let's look at those, shall we?

The main Scentrics patents pivot around the previously discussed system where a client (mobile, in the most recent patents, though there's nothing specifically "mobile" about them) talks to a centralised mail server to obtain encryption keys to safely send messages to it for routing onwards to a destination, and then separately sends a copy of the message (asynchronously! wow, there's some modern thinking) to a "monitoring" server using a different encryption key.

Basically, it's a system for a company or government to enable scanning of email sent by its employees/citizens - as long as they're using its mail application, of course. If the employees use Outlook.com, Gmail, or any number of other public webmail services, they are sunk. So companies will block all the webmail applications by restricting the web browsers in their corporate devices, forcing use of the corporate mail server (Outlook, most likely) which they can snoop on. They don't need Scentrics' patents. Governments would need a willing population to live with the (likely) crappy, unreliable custom email application and not look elsewhere for their email needs. Even China struggles to keep up with restricting their population to approved websites, and they're a gosh-darned communist dictatorship.

It's not impossible that Scentrics reckons they can get a major corporation or government to licence their patents, but I'd rate that as unlikely at best. Why would someone pay £500M for this, rather than (say) £5M for a moderately competent cryptographer to design a better system? The patent would be extremely dubious to defend, in my personal technical opinion; there are alternative strategies, such as encrypting the message with a randomised key, encrypting that key with a) the recipient's key and b) the monitoring service's key, and enclosing both encrypted keys in the message. Then the client only has to send one message, and the monitoring service can store it and decrypt it on demand. But hey, what do I know.
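To make that alternative concrete, here's a sketch using stock openssl command-line tools. All file names, the working directory, and the key sizes are mine and purely illustrative; a real design would use authenticated encryption and OAEP padding, not this bare-bones CBC/PKCS#1 demo:

```shell
#!/bin/sh
# One encrypted message; two wrapped copies of its key.
set -e
demo=/tmp/envelope-demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"

# Keypairs for the recipient and for the monitoring service.
openssl genrsa -out recipient.pem 2048 2>/dev/null
openssl rsa -in recipient.pem -pubout -out recipient.pub 2>/dev/null
openssl genrsa -out monitor.pem 2048 2>/dev/null
openssl rsa -in monitor.pem -pubout -out monitor.pub 2>/dev/null

# Encrypt the message once, with a random per-message key.
openssl rand -hex 32 > msg.key
echo "meet at noon" > msg.txt
openssl enc -aes-256-cbc -pbkdf2 -in msg.txt -out msg.enc -pass file:msg.key

# Wrap that one key twice; both wrapped keys travel with msg.enc.
openssl pkeyutl -encrypt -pubin -inkey recipient.pub -in msg.key -out key.rcpt
openssl pkeyutl -encrypt -pubin -inkey monitor.pub -in msg.key -out key.mon

# The monitoring service recovers the key - and the message - on demand.
openssl pkeyutl -decrypt -inkey monitor.pem -in key.mon -out msg.key.rec
openssl enc -d -aes-256-cbc -pbkdf2 -in msg.enc -pass file:msg.key.rec -out msg.dec
cat msg.dec
```

One message on the wire, no second asynchronous send - which is rather the point.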

Guru Paran Chandrasekaran and Andrea Bittau - happy to bring you gents up to speed on the state of modern cryptography, if you're interested. No charge!

(They've finally fixed their https problem. Guess it got a bit embarrassing.)

Update: Looks like Andrea Bittau was killed in a motorcycle crash last year. Nothing sinister, just terribly sad - 34 years old.

2018-07-08

How to kill Trusteer's Rapport stone dead

If you, like me, have had to wrangle with a family member's slow and balky Mac, you may have found the root cause of the slowness to be Rapport. This is an IBM-branded piece of "security" software, and has all the user-friendliness and attention to performance and detail that we expect from Big Blue - to wit, f-all.

I therefore followed the comprehensive instructions on uninstalling Rapport which were fairly easy to step through and complete. Only problem - it didn't work. The rapportd daemon was still running, new programs were still very slow to start, and there was no apparent way forward.

Not dissuaded, I figured out how to drive a stake through its heart. Here's how.

Rapport start-up

Rapport installs a configuration in OS X launchd which ensures its daemon (rapportd) is started up for every user. The files in /Library/LaunchAgents and /Library/LaunchDaemons are easy to remove, but the original files are in /System/Library/LaunchAgents and /System/Library/LaunchDaemons, and you need to kill those to stop Rapport.

However, System Integrity Protection (SIP) on OS X El Capitan and later prevents you from deleting files under /System - even as root.

Given that, the following instructions will disable SIP on your Mac, remove the Rapport files, and re-enable SIP. You should be left with a Mac that is no longer burdened by Rapport.

Check whether Rapport is running

From a Terminal window, type
ps -eaf | grep -i rapport
If you see one or more lines mentioning rapportd then you have Rapport running and you should keep going; if not, your problems lie elsewhere.

Disable SIP

Reboot your machine, and hold down COMMAND+R as the machine restarts. This brings you into Recovery mode. From the menu bar, choose Utilities → Terminal to open up a Terminal window. Then type
csrutil disable
exit

Now reboot and hold down COMMAND+S as the machine restarts to enter single-user mode (a black background and white text).

Find and delete the Rapport files

You'll need to make your disk writeable, so enter the two commands (which should be suggested in the text displayed when you enter single user mode):
/sbin/fsck -fy
/sbin/mount -uw /

Now
cd /System/Library/LaunchAgents
and look for the Rapport files:
ls *apport*
You can then remove them:
rm com.apple.RapportUI*
rm com.apple.rapport*

Then
cd ../LaunchDaemons
and look for the Rapport files there:
ls *apport*
You can then remove them too:
rm com.apple.rapportd*

Restore SIP

Rapport should now be dead, but you should re-enable SIP. Reboot and hold down COMMAND+R to go back to Recovery mode. From the menu bar, choose Utilities → Terminal to open up a Terminal window. Then type
csrutil enable
exit

Reboot, and you should be done. Open a Terminal window, type
ps -eaf | grep -i rapport
and verify that rapportd no longer appears.

2017-07-14

Unintended consequences of the TSA regulations

You'd have to be astonishingly ill-informed to believe that you could waltz through USA airport security with any recognisable knife in your carry-on luggage. TSA regulations specify:

Knives
Carry On Bags: No
Checked Bags: Yes
Except for plastic or round bladed butter knives.
Now, I'd read that as "you can have any kind of knife in your checked baggage apart from plastic knives or round bladed butter knives" but I'm a pedant; the overall guidance is clear.

A couple of weeks ago, my pal Harry turned up to San Francisco International Airport (motto: "Fogged in by design") for a flight up to Washington state. He wasn't checking any luggage, just carrying a backpack. Shortly before security he reached into the side pocket of his pack to get his passport, and while fumbling around he came across the folding knife that he'd left in there on his last hiking trip.

Crap.

Oh well, better to discover it now than later. He could have surrendered it to the TSA contractors but it had been an expensive knife when he'd bought it, and he'd had it a long time. He was damned if a TSA-contracted monkey was going to take it from him.

Not a problem! Airport Mailers are a company that allow you to mail items to yourself from the land-side of an airport. Harry walked over to the Airport Mailers kiosk and asked them for a pouch to mail his knife back.

"Sorry, we're out of pouches." Apparently they'd been out for most of the past week, and were "optimistic" of getting a delivery in the next few days - which, of course, did not help Harry, who began to see why endorsement of this firm is less than enthusiastic.

Harry realises that he could just drop the knife in the "Sin Bin" box at security, but then he would lose it forever - and it has a lot of sentimental value. Pondering the problem, his gaze alights on the plants in tubs used to decorate the hall:

"Problem solved!" Harry pulls out his folded knife, palms it, and sidles to the corner of one of the planters containing a particularly bushy plant. He casually slips his hand under the leaves and gropes around, trying to unobtrusively dig a hole in the soil to fit his knife.

This admittedly ingenious strategy is sadly not original; as Harry pokes his fingers into the soil, he discovers a wooden object that is indisputably a knife handle. It seems that what feels like half the planter is taken up with buried knives.

Harry, undeterred, finds an undisturbed corner of the planter, furtively buries his knife, and heads off to the gate. 48 hours later he's back, coming out of Arrivals. He wheels left, locates the planter, digs his knife out from the corner, and strolls off to his car. In the process he discovers that the other knife has vanished.

No doubt the TSA would posit this as a security "win", but it's not obvious that this is true. People are stashing knives all over San Francisco airport, and seem to be able to rely on picking them up again when they return. If they can manage this in a heavily-patrolled airport departures area, how effective do you think the TSA Security Theatre is at keeping hundreds of aircraft in an "allowed" state?

2017-05-12

Downsides of an IT monolith (NHS edition)

I have been watching, with no little schadenfreude (trans. "damage joy"), today's outage of many NHS services as a result of a ransomware attack.

This could happen to anyone, n'est-ce pas? The various NHS trusts affected were just unlucky. They have many, many users (admin staff in each GP's surgery; nurses, auxiliaries and doctors rushing to enter data before dashing off to the next patient). Why is it unsurprising that this is happening now?

The NHS is an organisational monolith. It makes monolithic policy announcements. As a result of those policies, Windows XP became the canonical choice for NHS PCs. It is still the canonical choice for NHS PCs. Windows XP launched to the public in late 2001. Microsoft ended support for Windows XP in April 2014. Honestly, I have to give Microsoft kudos for this (oh, that hurts) because they kept XP supported way beyond any reasonable timeframe. But all good things come to an end, and security updates are no longer built for XP. The NHS paid Microsoft for an extra year of security patches but decided not to extend that option beyond 2015, presumably because no-one could come up with a convincing value proposition for it. Oops.

The consequences of this were inevitable, and today we saw them. A huge userbase of Internet-connected PCs no longer receiving security updates is going to get hit by something - they were a bit unlucky that it was ransomware, which is harder to recover from than a straight service-DoS, but this was entirely foreseeable.

Luckily the NHS mandates that all critical operational data be backed up to central storage services, and that its sites conduct regular data-restore exercises. Doesn't it? Bueller?

I don't want to blame the central NHS IT security folks here - I'm sure they do as good a job as possible in an impossible-to-manage environment, and that the central patient data is fairly secure. However, if you predicate effective operations for most of the NHS on data stored on regular PCs then you really want to be sure that they are secure. Windows XP has been end-of-support for three gold-durned years at this point, and progress in getting NHS services off it has been negligible. You just know that budget for this migration got repurposed for something else more time-sensitive "temporarily".

This is a great example of organisational inertia, in fact maybe a canonical one. It was going to be really hard to argue for a massively expensive and disruptive change, moving all NHS desktops to a less-archaic OS - Windows 10 seems like a reasonable candidate, but would still probably require a large proportion of desktops and laptops to be replaced. As long as nothing was on fire, there would be a huge pushback on any such change with very few people actively pushing for it to happen. So nothing would happen - until now...

Please check back in 2027, when the NHS will have been on Windows 10 for 8 years - 2 years past its end-of-life - and the same thing will be happening again.

2016-12-27

Scentrics finds that security is hard

Two years ago I wrote about Scentrics and their "Key Man" security proposal. I wondered idly what had happened there so did some Googling. Turns out that I'm the top two hits for [scentrics key man] which is heart-warming for me but suggests that their world-beating security patent might have sunk like a stone...

I went to their website www.scentrics.com and noted that it didn't redirect to https. I tried https://www.scentrics.com and lo! Chrome's Red "Not secure" Warning of Death appears. Seems that Scentrics can't even secure their website, which is not a little ironic when their home page trumpets "Secure with Scentrics".

All the pages on the site - even "Overview and Vision" and "Careers" - are hidden behind a sign-on box, declaring the website "invitation only" and inviting you to contact "admin@scentrics.com" if you'd like access. You can view headers, but that's about it. You wonder why they would be so sensitive about exposing information like that.

The 2016 news included a nugget from the Daily Telegraph in June:

Scentrics is poised to seek new funding that would value the company at more than $1 billion as it prepares to rollout its infrastructure for the first time.
"Poised", huh? I like that. I read that as "not yet ready". I also like the uncritical write-up of the company's pitch:
Individual messages and documents sent over the internet can be unlocked without compromising the overall security of the network, according to Scentrics's pitch to operators and governments.
Remember that this essentially involved encrypting one copy of a message with the recipient's public key, and another with a government/agency public key, and storing the latter to give the agency access on demand. The government and security agencies involved might not think that this "compromises" the overall security of the network, but as a consumer of the network's function I can assure them that I'd feel very differently. And of course for this to be effective all network users would have to use a very small ecosystem of only approved apps / browsers which implemented this dual encryption, and maintained the central repository of government-friendly encrypted messages. I'm sure there's no risk of systematic system compromise there by insiders at all.

Companies House shows three officers plus a secretarial company, including our old friend Guruparan "Paran" Chandrasekaran. It looks like Sir Francis Mackay, David Rapoport and Dr. Thaksin Shinawatra have resigned since 2014 - interesting, because the latter gent used to be the Prime Minister of Thailand, and Scentrics trumpeted his role in the Telegraph piece, but as of 1 month ago he's out of his company role.

According to their June 2015 accounts they have about GBP4.2M in net assets; it looks like they had an infusion of about GBP4.5M during the year. Going from this to a $1bn valuation seems... optimistic.

Update: Looks like Scentrics are diving into Singapore with advertisements for Project Manager and Devops roles there. This seems to be part of the Singapore government's "Smart Nation" project for a unified network in Singapore:

  • A Smart Nation is one where people are empowered by technology to lead meaningful and fulfilled lives.
  • A Smart Nation harnesses the power of networks, data and info-comm technologies to improve living, create economic opportunity and build a closer community.
  • A Smart Nation is built not by Government, but by all of us - citizens, companies, agencies. This website chronicles some of our endeavours and future directions.
Cutting through the marketing speak, Singaporeans will be using a government-provided network for all services including personal and business communication. With Scentrics playing a role, the benevolent semi-dictatorship of Singapore will be able to snoop on all its citizens' internal communications at will.

Scentrics seems to be very comfortable enabling a government's surveillance on its citizens. I wonder how this is going to work out for them long-term given the distinctly libertarian tilt of most software engineers.

[Disclaimer: no share position in Scentrics. Financially I don't care if they live or die. Personally, I'd incline towards the latter.]

2015-05-13

You should care about moving to HTTPS

Eric Mill's "We're Deprecating HTTP and it's going to be okay" is a must-read call-to-arms for everyone with a site on the Internet, explaining why the transition from unencrypted web traffic (HTTP) to encrypted (HTTPS) is actually fundamental to the future existence of the democratic web-as-we-know it.

For the 90% of my reading audience who are already saying "Bored now!" here's why it matters to you. Sir Tim Berners-Lee invented HTTP (the language of communication between web browser and web server) in CERN, a European haven of free thought, trust and international co-operation. The 1930s idea that "Gentlemen do not read each other's mail" was - surprisingly, given the history of cryptographic war in WW2 - fundamental to HTTP; messages might have transited systems owned by several different groups, but none of them would have thought to copy the messages passing through their system, let alone amend them.

This worked fine as long as no-one was interested in the communication of harmless nerds about their hobbies, much as the government-owned Royal Mail doesn't bother to copy the contents of postcards passing through their sorting offices because they only contain inane drivel about sun, sea and sand. However, once people realized that they could communicate freely about their occasionally subversive ideas across borders and continents, and financial institutions woke to the possibility of providing services without paying for expensive un-scalable fallible human cashiers, many governments and other less-legal entities wanted to read (and sometimes alter) Internet traffic.

Mill gives two great examples of where HTTPS prevented - or could have prevented - nation-state abuse of Internet content:

- The nation of India tried and failed to ban all of GitHub. HTTPS meant they couldn't censor individual pages, and GitHub is too important to India's tech sector for them to ban the whole thing.
- The nation of China weaponized the browsers of users all over the world to attack GitHub for hosting anti-censorship materials (since like India, they can't block only individual pages) by rewriting Baidu's unencrypted JavaScript files in flight.
And closer to home, Cameron's plan to make all online communication subject to monitoring is so stupidly illiberal and expensively pointless that it deserves to be made impractical by general adoption of HTTPS. GCHQ and friends can tap all the Internet traffic they like: if it's protected by HTTPS, the traffic is just taking up disk space to no practical purpose. Brute-forcing, even with nation-state resources, is so expensive that it's reserved for really high-value targets. GCHQ would have to go after something fundamental like a Certificate Authority, which would leave big and obvious fingerprints, or compromise a particular user's machine directly, which doesn't scale.

As long as users are still relaxed about the absence of a padlock in their browser bar, HTTP will continue to provide a route for governments to snoop on their citizens' traffic. So let's give up on HTTP - it has had its day - and move to a world where strongly encrypted traffic is the default.
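If you run a site yourself, the server-side half of that move is small. A sketch for nginx - the server name is illustrative, and you still need a certificate provisioned (e.g. via Let's Encrypt):

```nginx
# Redirect every plain-HTTP request to its HTTPS equivalent.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```

A permanent (301) redirect means browsers and search engines learn the HTTPS address and stop asking for the HTTP one.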

2015-04-02

Active attack on an American website by China Unicom

I wondered what the next step in the ongoing war between Western content and Chinese censorship might be. Now we have our answer.

"Git" is a source code repository system which allows programmers around the world to collaborate on writing code: you can get a copy of a software project's source code onto your machine, play around with it to make changes, then send those changes back to Git for others to pick up. Github is a public website (for want of a more pedantic term) which provides a repository for all sorts of software and similar projects. The projects don't actually have to be source code: anything which looks like plain text would be fine. You could use Github to collaborate on writing a book, for instance, as long as you used mostly text for the chapters and not e.g. Microsoft Word's binary format that makes it hard for changes to be applied in sequence.
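The loop described above can be run entirely locally as a sketch - here a bare on-disk repository stands in for GitHub, and all paths, names, and the chapter text are illustrative:

```shell
#!/bin/sh
# clone -> change -> commit -> push, with a local bare repo playing GitHub.
set -e
demo=/tmp/github-demo
rm -rf "$demo" && mkdir -p "$demo"
git init --bare -q "$demo/upstream.git"      # stand-in for the GitHub repo

git clone -q "$demo/upstream.git" "$demo/copy" 2>/dev/null
cd "$demo/copy"
git config user.email "you@example.com"
git config user.name "You"
echo "Chapter 1" > book.txt                  # make a change (a book, even)
git add book.txt
git commit -qm "Add chapter 1"
git push -q origin HEAD                      # send it back for others

# Anyone else cloning the repository now picks up the change.
git clone -q "$demo/upstream.git" "$demo/verify" 2>/dev/null
cat "$demo/verify/book.txt"
```

Swap the on-disk path for a github.com URL and it's the same workflow.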

Two projects on GitHub are "greatfire" and "cn-nytimes", which are, respectively, a mirror for the Greatfire.org website focused on the Great Firewall of China, and a Chinese translation of New York Times stories. These are, obviously, not something to which the Chinese government wants its citizenry to have unfettered access. However, GitHub hosts many other non-controversial software projects, and is actually very useful to many software developers in China. What to do?

Last week a massive Distributed Denial of Service (DDoS) attack hit Github:

The attack began around 2AM UTC on Thursday, March 26, and involves a wide combination of attack vectors. These include every vector we've seen in previous attacks as well as some sophisticated new techniques that use the web browsers of unsuspecting, uninvolved people to flood github.com with high levels of traffic. Based on reports we've received, we believe the intent of this attack is to convince us to remove a specific class of content. [my italics]
Blocking Github at the Great Firewall - which is very easy to do - was presumably regarded as undesirable because of its impact on Chinese software businesses. So an attractive alternative was to present the Github team with a clear message that until they discontinued hosting these projects they would continue to be overwhelmed with traffic.

If this attack were just a regular DDoS by compromised PCs around the world it would be relatively trivial to stop: just block the Internet addresses (IPs) of the compromised PCs until traffic returns to normal levels. But this attack is much more clever. It intercepts legitimate requests from worldwide web browsers for a particular file hosted on China's Baidu search engine, and substitutes a response containing code that commands repeated requests for pages from the two controversial projects on Github. There's a good analysis from Netresec:

In short, this is how this Man-on-the-Side attack is carried out:
1. An innocent user is browsing the internet from outside China.
2. One website the user visits loads a JavaScript from a server in China, for example the Baidu Analytics script that often is used by web admins to track visitor statistics (much like Google Analytics).
3. The web browser's request for the Baidu JavaScript is detected by the Chinese passive infrastructure as it enters China.
4. A fake response is sent out from within China instead of the actual Baidu Analytics script. This fake response is a malicious JavaScript that tells the user's browser to continuously reload two specific pages on GitHub.com.

The interesting question is: where is this fake response happening? We're fairly sure that it's not at Baidu themselves, for reasons you can read in the above links. Now Errata Security has done a nice bit of analysis that points the finger at the Great Firewall implementation in ISP China Unicom:

By looking at the IP addresses in the traceroute, we can conclusively prove that the man-in-the-middle device is located on the backbone of China Unicom, a major service provider in China.
That the existing Great Firewall implementors added this new attack functionality is the explanation that fits Occam's Razor. It's technically possible for China Unicom infrastructure to have been compromised by patriotically-minded independent hackers in China, but given the alternative - that China Unicom has been leant on by the Chinese government to make this change - I know what I'd bet my money on.

This is also a major shift in Great Firewall operations: this is the first major case I'm aware of that has them focused on inbound traffic from non-Chinese citizens.

Github look like they've effectively blocked the attack, after a mad few days of scrambling, and kudos to them. Now we have to decide what the appropriate response is. It seems that any non-encrypted query to a China-hosted website would be potential fair game for this kind of attack. Even encrypted (https) requests could be compromised, but that would be a huge red arrow showing that the company owning the original destination (Baidu in this case) had been compromised by the attacker: this would make it 90%+ probable that the attacker had State-level influence.

If this kind of attack persists, any USA- or Europe-focused marketing effort by Chinese-hosted companies is going to be thoroughly torpedoed by the reasonable expectation that web traffic is going to be hijacked for government purposes. I wonder whether the Chinese government has just cut off its economic nose to spite its political face.

2015-03-04

What does "running your own email server" mean?

There's lots of breathless hyperbole today about Hillary Clinton's use of a non-government email address during her tenure as Secretary of State. The Associated Press article is reasonably representative of the focus of the current debate:

The email practices of Hillary Rodham Clinton, who used a private account exclusively for official business when she was secretary of state, grew more intriguing with the disclosure Wednesday that the computer server she used traced back to her family's New York home, according to Internet records reviewed by The Associated Press.
[...]
It was not immediately clear exactly where Clinton's computer server was run, but a business record for the Internet connection it used was registered under the home address for her residence in Chappaqua, New York, as early as August 2010. The customer was listed as Eric Hoteham.
Let's apply a little Internet forensics to the domain in question: clintonemail.com. First, who owns the domain?
$ whois clintonemail.com
[snip]
Domain Name: CLINTONEMAIL.COM
Registry Domain ID: 1537310173_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.networksolutions.com
Registrar URL: http://networksolutions.com
Updated Date: 2015-01-29T00:44:01Z
Creation Date: 2009-01-13T20:37:32Z
Registrar Registration Expiration Date: 2017-01-13T05:00:00Z
Registrar: NETWORK SOLUTIONS, LLC.
Registrar IANA ID: 2
Registrar Abuse Contact Email: abuse@web.com
Registrar Abuse Contact Phone: +1.8003337680
Reseller:
Domain Status:
Registry Registrant ID:
Registrant Name: PERFECT PRIVACY, LLC
Registrant Organization:
Registrant Street: 12808 Gran Bay Parkway West
Registrant City: Jacksonville
Registrant State/Province: FL
Registrant Postal Code: 32258
Registrant Country: US
Registrant Phone: +1.5707088780
Registrant Phone Ext:
Registrant Fax:
Registrant Fax Ext:
Registrant Email: kr5a95v468n@networksolutionsprivateregistration.com
So back in January this year the record was updated, and we don't necessarily know what it contained before that, but currently Perfect Privacy, LLC are the owners of the domain. They register domains on behalf of people who don't want to be explicitly tied to that domain. That's actually reasonably standard practice: any big company launching a major marketing initiative wants to register domains for their marketing content, but doesn't want the launch to leak. If Intel are launching a new microbe-powered chip, they might want to register microbeinside.com without their competitors noticing that Intel are tied to that domain. That's where the third party registration companies come in.

The domain record itself was created on the 13th of January 2009, which is a pretty strong indicator of when it started to be used. What's interesting, though, is who operates the mail server which receives email to this address. To determine this, you look up the "MX" (mail exchange) records for the domain in question, which is what any email server wanting to send email to hillary@clintonemail.com would do:

$ dig +short clintonemail.com MX
10 clintonemail.com.inbound10.mxlogic.net.
10 clintonemail.com.inbound10.mxlogicmx.net.
mxlogic.net were an Internet hosting company, bought by McAfee in 2009. So they are the ones running the actual email servers that receive email for clintonemail.com and which Hillary's email client (e.g. MS Outlook) connected to in order to retrieve her new mail.

We do need to take into account though that all we can see now is what the Internet records point to today. Is there any way to know where clintonemail.com's MX records pointed to last year, before the current controversy? Basically, no. Unless someone has a hdr22@clintonemail.com mail from her home account which will have headers showing the route that emails took to reach her, or has detailed logs from their own email server which dispatched an email to hdr22@clintonemail.com, it's probably not going to be feasible to determine definitively where she was receiving her email. However, CBS News claims that the switch to mxlogic happened in July 2013 - that sounds fairly specific, so I'll take their word for it for now. I'm very curious to know how they determined that.
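For reference, this is what those headers look like: each relay that handles a message prepends a Received: line, so reading them bottom-to-top reconstructs the route the message took. The message below is entirely fabricated for illustration (addresses and hosts are made up):

```shell
# Save a fabricated message and pull out its routing headers.
cat > /tmp/message.eml <<'EOF'
Received: from mx.example.net (mx.example.net [192.0.2.10])
        by mail.recipient.example; Tue, 02 Mar 2010 10:00:00 -0500
Received: from sender-pc (unknown [198.51.100.7])
        by mx.example.net; Tue, 02 Mar 2010 09:59:58 -0500
To: hdr22@clintonemail.com
Subject: an entirely hypothetical example
EOF
grep '^Received:' /tmp/message.eml   # the route, newest hop first
```

If someone produced a real message to or from her account, these are the lines that would show which servers actually handled her mail and when.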

All of this obscures the main point, of course, which is that a US federal government representative using a non-.gov email address at all for anything related to government business is really, really bad. Possibly going-to-jail bad, though I understand that the specific regulation requiring a government employee to use a .gov address occurred after Hillary left the role of SecState (Feb 2013). Still, if I were the Russian or Chinese foreign intelligence service, I'd definitely fancy my chances in a complete compromise of either a home-run server, or of a relatively small-scale commercial email service (mxlogic, for instance).

Desperately attempting to spin this whole situation, Heidi Przybyla from Bloomberg suggested that Jeb Bush was doing much the same thing with his jeb.org email.

OK, let's apply our forensics to jeb.org:
$ dig +short jeb.org MX
5 mx1.emailsrvr.com.
10 mx2.emailsrvr.com.
emailsrvr.com is, like mxlogic.net, a 3rd party email hosting service, apparently specialising in blocking spam. I'm not surprised that someone like Jeb Bush uses it. And, like Hillary, he isn't "running his own email server"; he's using an existing commercial email service. It's not Gmail/Outlook.com/Yahoo, but there's no reason to think it's not perfectly serviceable. It's also not controlled by Bush, so if the provider logs or archives incoming or outgoing email then his correspondence is legally discoverable.

The difference between Jeb Bush and Hillary Clinton of course, as many others note, is that Jeb is not part of the US federal government and hence not subject to federal rules on government email...

2014-12-24

Scentrics, "Key Man" and mobile security, oh my

From a story in the Daily Mail today I found this October article in the Evening Standard about security firm Scentrics, which has been working with UCL:

In technical parlance, Scentrics has patented the IP for “a standards-based, fully automatic, cryptographic key management and distribution protocol for UMTS and TCP/IP”. What that translates as in layman’s language is “one-click privacy”, the pressing of a button to guarantee absolute security.
Where issues of national security are concerned, the ciphers used are all government-approved, which means messages can be accessed if they need to be by the security services. What it also signals in reality is a fortune for Scentrics and its wealthy individual shareholders, who each put in £500,000 to £10 million.
Hmm. That's a fairly vague description - the "government-approved" language makes it look like key escrow, but it's not clear. I was curious about the details, but there didn't seem to be any linked from the stories. Scentrics' Paran Chandrasekaran was also touting this in the Independent in October, and it's not clear why the Mail ran with the story now.

I tried googling around for any previous news from Scentrics. Nada. So I tried "Paran Chandrasekaran" and found him back in 2000 talking about maybe netting £450M from the prospective sale of his company Indicii Salus. I couldn't find any announcements about the sale happening, but it looks like email security firm Comodo acquired the IP from Indicii Salus in March 2006. According to Comodo's press release:

The core technology acquired under this acquisition includes Indicii Salus Limited's flagship security solution which, unlike other PKI offerings, is based on server-centric architecture with all information held securely in a central location thus providing a central platform necessary to host and administer central key management solutions.
That's a single-point-of-failure design, of course - when your central server is down, you are screwed, and all clients need to be able to authenticate your central server, so they all need its current public key or a similar means of validating its signature. It's not really world-setting-on-fire stuff, but hey, it was 8 years ago.

Then LexisWeb turns up an interesting court case: Indicii Salus Ltd v Chandrasekaran and others with summary "Claimant [Indicii Salus] alleging defendants [Chandrasekaran and others] intending to improperly use its software - Search order being executed against defendants - Defendants applying to discharge order - Action being disposed of by undertakings not to improperly use software"

Where the claimant had brought proceedings against the defendants, alleging that they intended to improperly use its software in a new business, the defendants' application to discharge a search order, permitting a search of the matrimonial home of the first and second defendants, would be dismissed.
The case appears to be fairly widely quoted in discussions of search+seizure litigation. I wonder whether Paran Chandrasekaran was one of the defendants here, or whether they were other family members? There's no indication of what happened subsequently.

How odd. Anyway, here's a sample of the Scentrics patent (USPTO Patent Application 20140082348):

The invention extends to a mobile device configured to:
send to a messaging server, or receive from a messaging server, an electronic message which is encrypted with a messaging key;
encrypt a copy of the message with a monitoring key different from the messaging key; and
send the encrypted copy to a monitoring server remote from the messaging server.
[...]
Thus it will be seen by those skilled in the art that, in accordance with the invention, an encrypted copy of a message sent securely from the mobile device, or received securely by it, is generated by the device itself, and is sent to a monitoring server, where it can be decrypted by an authorized third party who has access to a decryption key associated with the monitoring key. In this way, an authorized third party can, when needed, monitor a message without the operator of the messaging server being required to participate in the monitoring process.
Because both the message and its copy are encrypted when in transit to or from the mobile device, unauthorized eavesdropping by malicious parties is still prevented.
This reads to me like "given a message and a target, you encrypt it with a public key whose private key is held by your target and send it to the target as normal, but you also encrypt it with a separate key known to a potential authorized snooper and send it to their server so that they can access if they want to."
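If that reading is right, the whole scheme can be sketched in a few lines of openssl. To be clear, this is my toy reconstruction of the claim, not Scentrics' actual protocol: all keys and filenames are made up, and a real system would wrap a symmetric key rather than RSA-encrypting the message directly.

```shell
cd /tmp
echo 'hello' > msg.txt

# One keypair for the intended recipient, one for the "monitor".
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out recipient.pem 2>/dev/null
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out monitor.pem 2>/dev/null
openssl pkey -in recipient.pem -pubout -out recipient.pub
openssl pkey -in monitor.pem -pubout -out monitor.pub

# The device encrypts the same message twice and sends both copies.
openssl pkeyutl -encrypt -pubin -inkey recipient.pub \
    -in msg.txt -out to_recipient.bin
openssl pkeyutl -encrypt -pubin -inkey monitor.pub \
    -in msg.txt -out to_monitor.bin

# The authorised snooper recovers the plaintext on their own, without
# involving the messaging server or the recipient's key.
openssl pkeyutl -decrypt -inkey monitor.pem -in to_monitor.bin
```

Which is to say: the "invention" is encrypting the message a second time with someone else's public key. Hence my scepticism below.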

WTF? That's really not a world-beating million-dollar idea. Really, really it's not. Am I reading the wrong patent here? Speaking personally, I wouldn't invest in this idea with five quid I found on the street.

2014-11-04

A caricature of Civil Service placement and rhetoric

The new director of GCHQ was announced earlier this year as Robert Hannigan, CMG (Companion of the Order of St Michael and St George, aka "Call Me God") replacing the incumbent Sir Iain Lobban, KCMG (Knight Commander of the Order of St Michael and St George, aka "Kindly Call Me God"). Whereas Sir Iain was a 30 year veteran of GCHQ, working his way up from a language specialist post, Hannigan was an Oxford classicist - ironically at Wadham, one of the few socialist bastions of the university - and worked his way around various government communications and political director posts before landing a security/intelligence billet at the Cabinet office. Hannigan is almost a cliché of the professional civil servant.

Hannigan decided to write in the FT about why Facebook, Twitter and Google increasing user security was a Bad Thing:

The extremists of Isis use messaging and social media services such as Twitter, Facebook and WhatsApp, and a language their peers understand. The videos they post of themselves attacking towns, firing weapons or detonating explosives have a self-conscious online gaming quality. [...] There is no need for today’s would-be jihadis to seek out restricted websites with secret passwords: they can follow other young people posting their adventures in Syria as they would anywhere else.
Right - but the UK or US governments can already submit requests to gain access to specific information stored by Facebook, Google, Twitter et al. What Hannigan leaves out is: why is this not sufficient? The answer, of course, is that it's hard to know where to look. Far easier to cast a dragnet through Internet traffic, identify likely sources of extremism, and use intelligence based on their details to ask for specific data from Facebook, Google, Twitter et al. But in the first half of 2014 alone, the UK issued over 2000 individual requests for data, covering an average of 1.3 people per request. How many terrorism-related arrests (never mind convictions) correspond to this - single digits? That's a pretty broad net for a very small number of actual offenders.

Hannigan subsequently received a bitchslap in Comment is Free from Libdem Julian Huppert:

Take the invention of the radio or the telephone. These transformed the nature of communication, allowing people to speak with one another across long distances far more quickly than could have ever been imagined. However, they also meant that those wishing to do us harm, whether petty criminals or terrorists, could communicate with each other much more quickly too. But you wouldn’t blame radio or phone manufacturers for allowing criminals to speak to each other any more than you would hold the old Royal Mail responsible for a letter being posted from one criminal to another.
Good Lord, I'm agreeing with a Libdem MP writing in CiF. I need to have a lie down.

Hannigan is so dangerous in his new role because he's never really had to be accountable to voters (since he's not a politician), nor influenced by the experience and caution of the senior technical staff in GCHQ (since he never worked there). He can view GCHQ as a factory for producing intelligence to be consumed by the civil service, not as a dangerous-but-necessary-in-limited-circumstances intrusion into the private lives of UK citizens. After all, he knows that no-one is going to tap his phone or read his email.

Personally, I'd like to see a set of 10 MPs, selected by public lottery (much like the National Lottery draw, to enforce fairness) read in on GCHQ and similar agency information requests. They'd get to see a monthly summary of the requests made and information produced, and would be obliged to give an annual public report (restricted to generalities, and maybe conducted 6 months in arrears of the requests to give time for data to firm up) on their perception of the width of the requests vs information retrieved. That's about 40 Facebook personal data trawls per MP, which is a reasonably broad view of data without excessive work. Incidentally, I'd also be interested in a breakdown of the immigration status of the people under surveillance.

2014-10-22

State-endorsed web browsers turn out to be bad news

Making the headlines in the tech world this week has been evidence of someone trying to man-in-the-middle Chinese iCloud users:

Unlike the recent attack on Google, this attack is nationwide and coincides with the launch today in China of the newest iPhone. While the attacks on Google and Yahoo enabled the authorities to snoop on what information Chinese were accessing on those two platforms, the Apple attack is different. If users ignored the security warning and clicked through to the Apple site and entered their username and password, this information has now been compromised by the Chinese authorities. Many Apple customers use iCloud to store their personal information, including iMessages, photos and contacts. This may also somehow be related again to images and videos of the Hong Kong protests being shared on the mainland.
MITM attacks are not a new phenomenon in China but this one is widespread, and clearly needs substantial resources and access to be effective. As such, it would require at least government complicity to organise and implement.

Of course, modern browsers are designed to avoid exactly this problem. This is why the Western world devotes so much effort to implementing and preserving the integrity of the "certificate chain" in SSL - you know you're connecting to your bank because the certificate is signed by your bank, and the bank's signature is signed by a certificate authority, and your browser already knows what the certificate authority's signature looks like. But it seems that in China a lot of people use Qihoo 360 web browser. It claims to provide anti-virus and malware protection, but for the past 18 months questions have been asked about its SSL implementation:

If your browser is either 360 Safe Browser or Internet Explorer 6, which together make up for about half of all browsers used in China, all you need to do is to click continue once. You will see no subsequent warnings. 360's so-called "Safe Browser" even shows a green check suggesting that the website is safe, once you’ve approved the initial warning message.

I should note, for the sake of clarity, that both the 2013 and the current MITM reports come from greatfire.org, whose owners leave little doubt that they have concerns about the current regime in China. A proper assessment of Qihoo's 360 browser would require it to be downloaded on a sacrificial PC and used to check out websites with known problems in their SSL certificates (e.g. self-signed, out of date, being MITM'd). For extra points you'd download it from a Chinese IP. I don't have the time or spare machine to test this thoroughly, but if anyone does then I'd be interested in the results.
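As a baseline for any such test, here's what a correct SSL implementation does with the certificate chain, sketched with a throwaway CA and server certificate (all the names and paths here are invented):

```shell
# A toy root CA, standing in for a real certificate authority.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=Toy Root CA' \
    -keyout /tmp/ca.key -out /tmp/ca.pem -days 1 2>/dev/null

# A server key and certificate request, signed by the toy CA.
openssl req -newkey rsa:2048 -nodes -subj '/CN=www.bank.example' \
    -keyout /tmp/srv.key -out /tmp/srv.csr 2>/dev/null
openssl x509 -req -in /tmp/srv.csr -CA /tmp/ca.pem -CAkey /tmp/ca.key \
    -CAcreateserial -out /tmp/srv.pem -days 1 2>/dev/null

# With the CA trusted, verification succeeds...
openssl verify -CAfile /tmp/ca.pem /tmp/srv.pem   # prints: /tmp/srv.pem: OK
# ...without it, a correct client must refuse the connection.
openssl verify /tmp/srv.pem 2>&1 || true
```

A browser that lets users click through that second failure with a single warning - or, worse, shows a green check afterwards - is discarding exactly the guarantee the chain exists to provide.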

Anyway, if the browser compromise checks out then I'm really not surprised at this development; in fact, I'm surprised it hasn't happened earlier, and I wonder if there have been parallel efforts at compromising IE/Firefox/Opera/Chrome downloads in China. It would take substantial resources to modify a browser installer so that it downloads and applies a binary patch adding an extra fake certificate authority (so that e.g. the Chinese government could pretend to be Apple), and more resources to keep up to date with browser releases so that the patch could be rebuilt shortly after each new version ships. Still, it's at least conceivable. But if you have lots of users of a browser developed by a firm within China, compromising that browser and its users is almost as good, and much, much easier.

2014-09-25

Signs that the terrorism threat might be overblown

Or maybe just a sign that the US education system is a pool of sharks...

Modern terrorism getting you down? Don't worry, it's an opportunity for you! Sign up for a certificate in Terrorism Studies!

In the program, you will develop an understanding of terrorism and counter-terrorism. The online program is suitable for students interested in pursuing a career in homeland security at local, state, or federal levels; joining national and international counter-terrorism agencies; conducting research on terrorism in academia; or seeking opportunities in relevant industries.
Presumably it's also suitable for students interested in pursuing a career in terrorism? Or maybe this is an elaborate honey trap by the FBI, but I suspect that a) they don't have the motivation and b) they can't afford to fund the course.

2014-09-06

New clamping down on information in China

Spotted this on a net security research blog yesterday: someone is trying to snoop on the web traffic of Chinese students and researchers:

All evidence indicates that a MITM [man-in-the-middle] attack is being conducted against traffic between China’s nationwide education and research network CERNET and www.google.com. It looks as if the MITM is carried out on a network belonging to AS23911, which is the outer part of CERNET that peers with all external networks. This network is located in China, so we can conclude that the MITM was being done within the country.
To decipher this, readers should note that CERNET is the Chinese network for education and research - universities and the like. The regular Great Firewall of China blocking is fairly crude and makes it practically difficult for researchers to get access to the information they need, so CERNET users have mostly free access to the Internet at large - I'm sure their universities block access to dodgy sites, but to be fair so do Western universities. What's happening is that someone is intercepting - not just snooping on - their requests to go to www.google.com and is trying to pretend to be Google.

The reason the intercept is failing is because Google - like Facebook, Yahoo, Twitter and other sites - redirects plain HTTP requests to its homepage to a HTTPS address, so most people bookmark those sites with an HTTPS address. Therefore the users were requesting https://www.google.com/ and the attackers had to fake Google's SSL certificate. Because of the way SSL is designed, this is quite hard; they couldn't get a reputable Certificate Authority to sign their certificate saying "sure, this is Google" so they signed it themselves, much like a schoolchild signing a note purportedly from their parent but with their own name. Modern browsers (Chrome, Firefox, modern versions of IE) warn you when this is happening, which is how the users noticed. The Netresec team's analysis showed that the timings of the steps of the connection indicated strongly that the interceptor was somewhere within China.
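The schoolchild's forged note is easy to reproduce yourself: anyone can mint a certificate claiming to be Google, and any honest verifier will spot that it vouches only for itself. The key and filenames below are throwaway:

```shell
# Mint a self-signed certificate claiming to be www.google.com.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=www.google.com' \
    -keyout /tmp/fake.key -out /tmp/fake.pem -days 1 2>/dev/null

# Verification against the normal trust store rejects it; this is
# essentially the warning the CERNET users saw in their browsers.
openssl verify /tmp/fake.pem 2>&1 || true
```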

The attack doesn't seem to be very sophisticated, but it does require reasonable resources and access to networking systems - you've got to reprogram routers in the path of the traffic to redirect the traffic going to Google to come to your own server instead, so you either need to own the routers to start with or compromise the routers of an organisation like a university. Generally, the further you get from the user you're intercepting, the greater your resources need to be. It would be interesting to know what fraction of traffic is being intercepted - the more users you're intercepting, the more computing resource you need to perform the attack because you've got to intercept the connection, log it, and then connect to Google/Twitter/Yahoo yourself to get the results the user is asking for.

The attempted intercepts were originally reported on the Greatfire.org blog which observes that there were several reports from around CERNET of this happening. Was this a trial run? If so it has rather blown up in the faces of the attackers; now the word will circulate about the eavesdropping and CERNET users will be more cautious when faced with odd connection errors.

If the attackers want to press on, I'd expect the next step to be more sophisticated. One approach would be SSL stripping where the interceptor tries to downgrade the connection - the user requests https://www.twitter.com/ but the attacker rewrites that request to be http://www.twitter.com/. The user's browser sees a response for http instead of https and continues with an unencrypted connection. Luckily, with Twitter this will not work well. If you run "curl -I https://www.twitter.com/" from a command line, you'll see this:

HTTP/1.1 301 Moved Permanently
content-length: 0
date: Sat, 06 Sep 2014 17:23:21 UTC
location: https://twitter.com/
server: tsa_a
set-cookie: guest_id=XXXXXXXXXXXXXXXXX; Domain=.twitter.com; Path=/; Expires=Mon, 05-Sep-2016 17:23:21 UTC
strict-transport-security: max-age=631138519
x-connection-hash: aaaaaaaaaaaaaaaa
That "strict-transport-security" line tells the browser that future connections to this site for the next N seconds must use HTTPS, and the browser should not continue the connection if the site tries to use HTTP. This is HTTP Strict Transport Security (HSTS) and Twitter is one of the first big sites I've seen using it - Google and Facebook haven't adopted it yet, at least for their main sites.

Alternatively the interceptor may try to compromise a reputable certificate authority so it can forge SSL certificates that browsers will actually accept. This would be a really big investment, almost certainly requiring nation-state-level resources, and would probably not be done just to snoop on researchers - if you can do this, it's very valuable for all sorts of access. It also won't work for the major sites as browsers like Chrome and Firefox use certificate pinning - they know what the current version of those sites' SSL certs look like, and will complain loudly if they see something different.
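What the browser actually pins is, roughly, a hash of the site's public key, which is why a forged certificate fails even when a compromised CA has signed it: the attacker's key is different, so the hash is different. A sketch, with a throwaway key standing in for Google's:

```shell
# Generate a stand-in for the site's real keypair.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out /tmp/site.pem 2>/dev/null

# The "pin": a SHA-256 digest of the DER-encoded public key. The
# browser ships with the expected value baked in; any certificate
# carrying a different key produces a different digest and is refused.
openssl pkey -in /tmp/site.pem -pubout -outform DER 2>/dev/null \
    | openssl dgst -sha256 -binary | base64
```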

The most effective approach, for what it's worth, is to put logging software on all the computers connected to CERNET, but that's probably logistically infeasible - it only works for targeting a small number of users.

So someone with significant resources in China is trying to find out what their researchers are searching for. Is the government getting nervous about what information is flowing into China via this route?

2014-05-24

Software - Everything Is Broken

I don't agree with 100% of this article, but it's sufficiently true and well explained that it's worth reading the whole thing. Quinn Norton reports that "Everything is broken":

It was my exasperated acknowledgement that looking for good software to count on has been a losing battle. Written by people with either no time or no money, most software gets shipped the moment it works well enough to let someone go home and see their family. What we get is mostly terrible.
This near-perfectly expresses the problem with software. The only point I'd differ on is that it's not even that it "works well enough" - in reality it's shipped when it's perceived to work well enough by people who generally aren't able to tell how well it's actually working.

It's certainly true that people are awful users of software. This is generally because software is written and tested by people who are completely unrepresentative of the software user base. Here's an example from today. I try to connect, using Firefox, to a website which I happen to know has a problem with its security certificate (it's been revoked by the owner). Here's what I get:

OK, so let's suppose that I'm my mother. What the hell am I supposed to do with that information? It's good that Firefox has recognised that the site is broken and has stopped me connecting to it - but "Please contact the website owners to inform them of this problem"? Seriously? How do I even know who the "website owners" are? Chrome is a little bit better - it warns that "if you try to visit [site] now you might share private information with an attacker" and suggests reloading the site in a few minutes or using a different wifi network, but it says that "something is interfering with your secure connection" when it would be better to say something like "I can't make a secure connection to this website - I've checked a couple of other websites and secure connections to them are OK, so it's probably just something wrong with this particular website". Chrome and Firefox's error messages in this situation are reasonably useful, but they're written for technically-savvy people - not for the vast majority of their user base.

As Quinn notes, for relatively non-technical people who don't generally have control over their computers, security is essentially impossible:

What's the best option for people who can't download new software to their machines? The answer was unanimous: nothing. They have no options. They are better off talking in plaintext I was told, "so they don’t have a false sense of security."
I think this is slightly pessimistic. Doing everything in plaintext makes it trivially easy for the intelligence agencies, crackers and other ne'er-do-wells to scoop up everything. Better is to ensure that the world uses such a diverse and changing ecology of software and hardware that even concerted efforts to compromise a security system will only yield a relatively small fraction of the world - we can't stop those people from compromising our security if they really want to, but at least we can make the bastards work hard for it.

2014-04-08

A lesson from OpenSSL

If you are paranoid about secrecy on the web, today's news about a bug in OpenSSL may make you feel justified. OpenSSL is an open source library that is used by companies, individuals and governments around the world to secure their systems. It's very widely used for two reasons: 1) a very useful set of licensing conditions that essentially say you're fine to use it as long as you credit the right authors in the source and 2) because so many commercial firms depend on it, its source has been scrutinised to death to spot both performance and functional bugs.

A one-paragraph primer on SSL (Secure Sockets Layer): it's the method by which a regular web browser and a secure web server communicate. You're using it whenever the address bar in your browser shows a URL starting with "https:" instead of "http:" - so that's your online banking, Facebook, Google, Twitter, Amazon... Most of these secure web servers will be using OpenSSL. There are alternatives, but none of them is compellingly better, and the widespread use of OpenSSL probably makes it less likely to contain security bugs than the alternatives - there's safety in belonging to the herd.

Anyone who's thinking "aha, my company should avoid this problem by developing their own SSL implementation" or better yet "my company should develop a more secure protocol than SSL, and then implement that!" has not spent much time in the security space.

And yet, someone has just discovered a bug in a very widely used version of OpenSSL - and the bug is bad.

To get some perspective on how bad this is, the Heartbleed.com site has a nice summary:

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.
Sounds dire, no? Actually that's the worst case. The bug gives an attacker access to memory on the secure server that they shouldn't have; that memory *might* contain secrets, but the attacker doesn't get to choose which area of memory they read. They'd have to make many queries to have a good chance of hitting a secret, and it's not too hard to spot when one small corner of the Internet has that kind of unusual access pattern to your server. Even if they make 1000 reads and get one secret, they still have to recognise that the data they get back (which will look like white noise) has a secret somewhere in it. I don't want to downplay how serious the bug is - anyone running an OpenSSL server should upgrade to get the fix as soon as humanly possible - but it's not the end of the world as long as you're paying attention to potential attacks on your servers.

Still, isn't this bug a massive indictment of the principle of Open Source (that you'll have fewer bugs than commercial alternatives)? It's appropriate here to quote Linus's Law, coined by Open Source advocate Eric Raymond and named after Linus Torvalds, creator of the Linux kernel:

"Given enough eyeballs, all bugs are shallow"
or more formally:
"Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone."
Unfortunately, the larger and more complex your codebase, the larger the tester and developer base has to be and the longer it takes to find problems...

It's tempting to look at this security alert, note that Open Source has allowed a critical bug to creep into a key piece of Internet infrastructure (clearly true), and conclude that this can't be the right approach for security. But you have to look at the alternatives: what if OpenSSL was instead ClosedSSL, a library sold at relatively low cost by respected security stalwart IBM? ClosedSSL wouldn't have public alerts like this; if IBM analysis found bugs in the implementation then they'd just make an incremental version release with the fix. But the bug would still be there and would not be any less exploitable for the lack of announcement. You'd have to assume that government agencies (foreign and domestic) would bust their guts to plant someone or something with access to the ClosedSSL team mail, and in parallel apply object code analysis to spot flaws. The flaw would not be much less exploitable for lack of publicity, and would likely be in the wild longer because IBM would never announce a flaw so vocally, so users would be more lax about upgrades.

There are then two lessons from OpenSSL: 1) that even Open Source inspection by motivated agencies can't prevent critical bugs from creeping into security software and 2) that no matter how bad the current situation is, it would be worse if the software was closed-source.

2014-02-23

Apple's SSL bug - better code reviews required

There's a great technical discussion by Adam Langley at Imperial Violet on the inadvertent security hole that Apple introduced to iOS 7 and later versions of OS X. They've released a patch for iOS (which is how people noticed) but are still working on the OS X fix. My sympathies are with Apple despite them being panned for the delay - the fix itself is straightforward, but building, qualifying, canarying and distributing the desktop fix inevitably takes a while, and if you try to speed up this process you run a high risk of making things much, much worse.

The effect of the bug is that it allows a certain kind of attack ("man in the middle") which intercepts secure web connections, say from a user on an Apple laptop to their online banking system. An attacker with sufficient access and resources can pretend to the user to be their online banking server, and the user will have no practical way to detect this. However in practice it is very difficult to exploit, and is only really a concern for users who believe that they may be targeted by government agencies or well-funded and persistent private parties; it's unlikely that it will be widely exploited. Modern iOS and Safari users are not a large fraction of internet traffic, even if you only look at HTTPS traffic.

The bug itself is probably only interesting to code nerds such as your humble correspondent, but how it came about is quite telling about how software development works at Apple.

Here's a cut-down version of the offending function:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
	OSStatus        err;
	[...]
	if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
		goto fail;
	if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
		goto fail;
		goto fail;
	if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
		goto fail;
	[...]

fail:
	SSLFreeBuffer(&signedHashes);
	SSLFreeBuffer(&hashCtx);
	return err;
}
See that third "goto fail;" line in the middle? That's the error. Almost certainly it was the result of a fat-finger in a code editor, it's very unlikely to be a deliberate change. For tedious reasons related to how code blocks work in the C programming language, the effect of the third "goto fail;" is very different to the first two. It isn't tied to a condition, so if the program manages to get past the first two "if" statements successfully (the initial secure connection checks) then it never carries out the third check. When it reaches the end of the code, the result in the variable "err" actually represents whether the first two checks completed successfully, not (as required) whether all three checks completed successfully.

The reason this interests me is that this change made it into an official Apple release without being detected. I claim that if this code change was reviewed by a human being (as it definitely should have been) then anyone paying the slightest attention would have seen the duplicate "goto fail;" line, which makes absolutely no sense. I can fully understand this error not being picked up by automated testing - it's not straightforward to build a test which could cause this particular case to fail - but this is another indicator that Apple are not paying nearly enough attention to developing software reliably. Getting another person to review your code changes is a basic part of the software development process. If it's not being done, or only being conducted in a cursory fashion, you're going to leave your code riddled with bugs. There is no shortage of bad actors waiting for you to make a mistake like this.

I'm really curious about how this got noticed. My money is on someone browsing the code to make an unrelated change, and being drawn to the duplicate line, but that's only speculation.

I've given Apple heat for their sloppy approach to security in the past and I'm concerned that they're not reacting to the clear signs that they have a problem in this area. If code changes to a key security library are not going through human review, they're going to continue to have problems.

2013-12-15

The 2014 Privies

Extremely entertaining - and, in parallel, depressing reading - at Skating on Stilts which has announced the shortlist for the 2014 Privies - dubious achievements in privacy law. Privacy has been getting quite the airing in the past year, which makes the shortlist candidates even more impressive. Please go and vote for your favourite.

While I don't want to unduly influence voting, I feel I must draw attention to some particularly outstanding candidates. First up, President Hollande of France for "Privacy Hypocrite of the Year":

President Hollande called President Obama to describe U.S. spying on its allies as "totally unacceptable," language that was repeated by the Foreign Ministry when it castigated the U.S. ambassador over a story in Le Monde claiming that NSA had scooped up 70 million communications in France in a single month.
Whoops. Two days later, former French foreign minister Kouchner admitted, "Let's be honest, we eavesdrop too. Everyone is listening to everyone else. But we don't have the same means as the United States, which makes us jealous."

For "Worst use of privacy law to protect power and privilege", Max Moseley must be the front runner by a mile:

Mosley himself achieved notoriety in 2009, when the media published pictures of him naked and engaged in a sado-masochistic orgy with five prostitutes. In a move that seems to define self-defeating, Mosley went to court to establish that it was a naked, five-hour sado-masochistic orgy with five hookers, but it wasn't a naked, five-hour sado-masochistic orgy with five hookers and a Nazi theme. He won.

I await the announcement of the shortlist for "Dumbest Privacy cases" with great interest...

2013-10-30

For some needs, the government comes through

There's a lot of anger in America currently about the general incompetence of the federal government, but it's encouraging to see that at least one government agency is actually good at what it's paid to do:

The National Security Agency has secretly broken into the main communications links that connect Yahoo and Google data centers around the world, according to documents obtained from former NSA contractor Edward Snowden and interviews with knowledgeable officials.
Privacy concerns aside, you've got to admire the NSA for actually conducting some good modern communications interception. Someone probably deserves a substantial bonus; he won't get it, of course, because he's on a government payroll - he'll no doubt defect to the private sector eventually, or maybe the SVR will make him the proverbial un-refusable offer.

It would be fascinating to know whether the NSA is just tapping links external to the USA (presumably including links with no more than one node in the USA) or have general access to intra-USA traffic. It's also interesting to speculate on the connection between this eavesdropping and Google's move back in September to encrypt the traffic that the NSA seems to have been intercepting. Yahoo still seems to be open, based on a rather inadequate denial from their PR:

At Yahoo, a spokeswoman said: "We have strict controls in place to protect the security of our data centers, and we have not given access to our data centers to the NSA or to any other government agency."
and one has to wonder about Facebook, Apple, Amazon etc.

So congratulations, citizens of the USA - you have a productive and competent government agency! Perhaps you should have put the NSA in charge of healthcare...