Misplaced "trust" in "trustee"

It appears that I was, if anything, too kind about Detroit's pension problem. The outside audit report has been released, and it apparently does not make for pretty reading:

[Pension trustee spokesperson Tina Bassett] said that the trustees were administering benefits that had been negotiated by the city and its various unions and that they had established an internal account to set aside “excess earnings” that would cover the cost. She said it was appropriate for retirees to benefit from market upturns because they had paid into the pension fund, so their own contributions had generated part of the investment gains.
In other words, whenever returns went above projections they promptly gave away the excess. And when returns went below projections, and the huge gap was visible, what did they do?
So much money had been drained from the pension fund that by 2005, Detroit could no longer replenish it from its dwindling tax revenue. Instead, the city turned to the public bond markets, borrowed $1.44 billion and used that to fill the hole.
Even that did not work. In June, Detroit failed to make a $39.7 million interest payment on that borrowing — the first default of what was soon to become the biggest municipal bankruptcy case in American history.
Detroit said at the time that making the interest payment would have consumed more than 90 percent of its available cash.
I'm not sure that there are words in the English language suitable to describe this.

Policy wonk Megan McArdle nails the most likely explanation for this demented behaviour by the trustees:

My best guess is that they were thinking the pensions would have to be paid, one way or another. After all, it’s in the Michigan State Constitution. So they could pay out bonuses, please various constituencies, and then force the city or the state to make them whole when it all came tumbling down. They didn’t reckon with the possibility that the city would simply run out of money, and the state would decline to step in, leaving them with no deep pockets to make up for their mismanagement.
This is only speculation, but I fear it's an all-too-plausible motivation for the trustees. I cannot see how a pension fund trustee can see this hole growing steadily in the accounts over many years and yet blithely continue handing out "excess" cash. Whom exactly did they think they were helping?

It appears that Detroit pension trustee spokesperson Tina Bassett used to be Chief Communication Officer for the city of Detroit and was "widely recognized for her innovative and creative development of high-impact communication programs." I'll say; the pension trustees have had quite the impact on Detroit.

Fortunately, the city unions have now recognized the severity of the financial situation and are stepping up to the plate to be responsible:

Detroit's bankruptcy judge should allow a state employment panel to reinstate a pension program that gave an extra check to retirees every year using excess earnings, a city union said in court papers.
Oh. Maybe not, then.

There was a lot wrong with Margaret Thatcher's policies, as with any politician's, but she very accurately anticipated this situation back in 1976:

I think they've made the biggest financial mess that any government's ever made in this country for a very long time, and Socialist governments traditionally do make a financial mess. They always run out of other people's money.
She was talking about the UK Labour government at the time, but the criticism could be perfectly well pointed at Detroit. And probably LA, Washington D.C., San Diego, San Jose,...

I was talking to a Canadian the other day who said that the best option for Detroit was to raze most of it to the ground and just let Canada extend its border a few miles to include the Detroit metropolitan area and airport. Canada gains a handy additional major airport, can move people and industry out of the overcrowded Greater Toronto Area a few hours west to Detroit (provisionally renamed "Harperville"), and the remaining Detroit residents get proper Canadian healthcare and easier access to Tim Hortons coffee and donuts. What's not to like?


Off-the-shelf software - not a panacea

I read with interest a discussion at Mr. Worstall's place about the forthcoming IT disaster that will be the "Obamacare" (Affordable Care Act) insurance exchanges. Commenter Steve Crook left a comment that I thought deserved further attention:

Part of the problem is that software development is still using basic tools and hand crafting everything. Things have improved a lot in the last decade or so, but we're still a long way from the 'engineering' part of software engineering.
You'll know things have changed when it's possible for software can be assembled from a catalog of standard parts and has an MTBF.
I'm 95%+ behind his first two sentences, but the last one deserves more scrutiny, rampant speculation and blatantly biased opinion. What better medium than a blog post to do so?

Software engineering is hard, which is why most programmers don't bother with it. We see the results all around us in ubiquitous IT failures. For the (relatively) few cases where failures really do matter in a reputational, safety and/or financial sense, software engineering really does come into its own. Let's examine those cases to see why software engineering matters and what traps lie in wait for users of off-the-shelf components ("COTS" - commercial off-the-shelf systems). We'll take the Affordable Care Act (ACA) per-state insurance exchanges as an example.

For those unacquainted with the ACA, one of its key aims is to make affordable insurance plans available to the masses. Many people will obtain their health insurance via a scheme with their employer, but this is only available to full-time employees, and such plans are subject to strict criteria on minimum coverage - which is why many USA employers are switching employees to 30-hour weeks in order to make them part-time and avoid the expense of these plans, but I digress. People over 65 or so are covered by the existing Medicare system, and a subset of poor people are covered by Medicaid. Let's assume that for whatever reason there's a large rump of families and single people under retirement age who need coverage; how do they obtain it? ACA requires that each state have an insurance "exchange" on which various insurers offer ACA-compliant plans, and the uninsured are required to have coverage or pay a fine.

People unfamiliar with the USA - and I include perhaps 90% of the UK population in that group, the American penetration of TV and cinema notwithstanding - fail to comprehend the importance of the state (not "State") in America. While US states are not homogeneous, there are well-known stereotypes of states which are accurate enough for generalisations. Florida is full of retirees. Everyone in Texas has a gun, even the florists (other than descendants of Quakers). Minnesota is a mix of Scandinavians and hardcore Islamists. Massachusetts is full of liberals. North Dakota is isolated, snow-blown, and populated by flint-eyed people that would have no problem feeding you into a woodchipper. And so on. The state takes precedence in the USA Constitution ("The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.") and different states have very different laws on employment, welfare etc. A per-state approach in the provision of healthcare exchanges therefore makes a certain amount of sense.

That said, states are still quite large. In many cases they can be thought of as entire countries with populations in the tens of millions. Each state is therefore going to have to implement an insurance exchange which can handle millions of unique users - with reasonably strong authentication requirements, since letting user X have access to the medical records of user Y is a no-no. Since users are disproportionately likely to be poor and poorly-educated, this poses its own challenges. The exchanges will receive traffic at a steady rate (people moving in and out of state, changing work and insurance status) combined with sudden spikes (annual or semi-annual application deadlines). They need to sustain these high traffic rates while interfacing with a relatively small number of insurance providers. You don't want this exchange to be down for any extended period, since when people are looking for healthcare insurance it's often at compressed timescales and under Government mandate.

Why wouldn't you use off-the-shelf software for such a system? Well, in some cases you would. Because of the uptime requirements, this system will need to be distributed - different instances in different physical locations such that a power/network failure (happens all the time) or maintenance period won't take out your entire site, so you'll likely use off-the-shelf replicated database solutions. Because of the authentication and security requirements you'll use off-the-shelf open source crypto libraries like OpenSSL. Because you want your individual hardware platforms to be reliable you'll use an off-the-shelf commercially supported Unix like Red Hat Enterprise Linux. So far, so good - these are all really generic services used by thousands if not millions of customers. They might break, but there's strong commercial pressure for them to a) be really careful about testing updates and b) update whenever they find a critical bug.

The problem comes when you move to a higher level of functionality. The rule of thumb is that the more users of the software while it is being actively maintained and developed, the more reliable the software over time. Software which is badly maintained undergoes a brutally Darwinian process where the bug reports from irate users steadily increase to the point where the remaining developers eventually give up their 96 hour weeks and slink away to other contracts, leaving a fetid mess of software. Open source software, by contrast, can always be fixed by someone sufficiently motivated. Not all users are sufficiently motivated, but for some use cases you can find enough interest for the software to be iterated into moderately robust usability. The problem with the ACA exchanges is that they are a unique application - no-one else is trying to manage a government-supervised health insurance exchange - and they are limited to 50 clients (the US states) which vary from 38M people to 600,000 people in size - nearly two orders of magnitude. What works well for Wyoming and Vermont won't be suited for California and Texas, and every state has different population profiles and healthcare laws.

Worse, since each state will be trying to build its own exchange, you'll likely end up with 3-5 large firms supplying the 50 states with various "tailored" solutions based on their own custom models. Each state will have to cope with exchanging some data with other states, as people move across state boundaries; that's 49 moving targets for the exchange backends to cope with. The ACA restrictions will change year after year, so the exchanges will need to be flexible. Within any one state the exchanges will need to exchange data frequently and securely with the insurance providers within that state, which is going to be the real headache. Finally, in an effort to "prove" the efficiency of the exchanges, more and more reports will be required to be run on exchange activity and members, loading the exchange backends with queries and stressing the access protection mechanisms. This is before we consider the problem of malicious attacks to compromise, overload or denial-of-service attack the exchange front-ends, and the risk of compromised exchange maintainers dumping data out to sell.

In isolation, you can probably find software solutions to each of these problems. The problem will be in glueing together these solutions into a coherent, working and maintainable system. For instance, if you spend 2 months incorporating version X of a data querying system and then the manufacturer releases version Y, what do you have to do to ensure that version Y does not introduce insecurities, incompatibilities or performance decreases into your exchange? How do you try out version Y safely? If it doesn't work out, how do you roll it back - bearing in mind that you may have had to reformat your data to be compatible with version Y? All these are system-level problems that your exchange operators need to solve. How do you know that your data storage system actually scales in practice to the number of concurrent users that you will have? Unless another state-level organisation is already using it, it's likely that you have no idea. Your state will be the guinea pig. It's likely that you'll hit any number of bottlenecks in the software and some will be expensive and time-consuming to remove.

The biggest danger is the deadline. There's nothing more prone to cause panic in a software development than an externally-imposed deadline for operation. Completion times for software are famously hard to estimate, and so 2 months before the deadline you will probably have no idea if you can hit it. Even if conditions are favourable, it takes an exceptionally hard-headed and technically able project manager to triage appropriately and ensure that developers only work on the aspects of the system and its environment that are crucial to operation. Worse, a government-mandated government-funded project with a government-imposed deadline practically requires the state to throw money at the delivery of the system - this attracts the kind of developers who bill by the hour, anticipating a lucrative few months as they labour away as part of a cast of thousands trying to get the system out of the door. There's no alternative to paying for a new system, so the cost will go through the roof. If you're lucky, the system may approximately work some time after the deadline, but there's definitely no guarantee of this.

Conclusion: the ACA exchanges are going to be one more example of government IT projects that run horribly over-budget and deliver (at best) a barely-working unmaintainable system. It's great news for IT contractors and for large project-managing firms like EDS, Lockheed-Martin etc., but the taxpayers are really going to get it in the shorts.


Soft and hard targets

The ever-enlightened Simon Jenkins in the Guardian has a fascinating insight into how to deal with terrorist attacks:

The modern urban obsession with celebrity buildings and high-profile events offers too many publicity-rich targets. A World Trade Centre, a Mumbai hotel, a Boston marathon, a Nairobi shopping mall are all enticing to extremists. Defending them is near impossible. Better at least not to create them.
Is it just me, or does this sound awfully like "women shouldn't wear short skirts, because it's provocative and makes men want to rape them?" It's a rather odd sentiment, coming from the Guardian of all newspapers.

But let's run with Sir Simon's argument and see where it takes us:

A shopping mall not only wipes out shopping streets, it makes a perfect terrorist fortress, near impossible to assault. There is no defence against the terror weapons of guns and grenades.
That does rather assume that the terrorists can take over the mall in the first place, of course. I invite the gentle reader to consider how far al-Shabaab would have got in a Texas mall. Remember that both the Washington Navy Yard shooter and the soon-to-be-very-ex-Major Nidal Hasan's Fort Hood shooting were only able to carry on as long as they did, and shoot as many people as they did, because both areas were gun-free zones. In both cases, once armed police officers turned up they engaged the gunman and ended up shooting him. From this we can deduce that if you want to stop a determined shooter, having guns and the training to use them is rather important.

You'll never be able to stop a determined shooter from getting off his or her first few shots at innocents. The difference is that in a Texas mall the volume of retaliatory fire will drastically limit the number of casualties, and give the gunman very little time to pick their shots before defending themselves from imminent death becomes their overriding concern. For the record, despite the above video, I'd rather the civilians use pistols in a mall - high-powered rifles are probably not the best firearm in a crowded environment with solid flat surfaces everywhere.

As for bombs, it seems that Sir Simon would rather people didn't go to church because it's a near-irresistible target for bombers. I invite the reader to consider where such an approach would lead, and wonder at what an Oxford PPE must do to one's brain, not to mention spine.


Oh FFS Apple

It's another lock screen security breach on the iPhone, this time in iOS 7:

The exploit can be initiated by swiping upwards on the device's lock screen to access the Control Center and open the Clock app. Once the clock app is open, holding the phone's sleep button will cause the "Slide to Power Off" option to appear. Tapping on cancel at this juncture and then double clicking on the home button will open the phone's multitasking screen, providing access to the camera and the photos on the device. The key to the trick, however, is to access the camera app from the lock screen first, causing it to appear in the recently used apps list.
This is far from the first lock screen exploit. Have Apple given up entirely on security testing? They know this is a ripe vector for exploits, and they let this through the gates. As I noted back in February for a previous lockscreen exploit:
What the flaw indicates, however, is that Apple is pressuring phone development and skimping on testing and security. This is not going to be an isolated problem.

The human face of excessive state spending

When you promise future benefits that can't possibly be paid, you end up like Detroit:

"I object to being referred to as a creditor," said retiree Paulette Brown, a former water department employee who got notice of the bankruptcy because her pension is at risk. "What I am is a dedicated public servant … Who's going to prison for the proposed cruelty to retirees?"
Less compassionate commentators observe that Detroit was obviously heading down the tubes and that its future promised income was at risk. But that's a pretty sophisticated observation to expect of a low-level government employee who believed that her pension money was safe and now faces a huge chunk being taken out of it through no fault of her own.

I have slightly less sympathy for Cynthia Blair, on a $3,000-a-month pension from her police sergeant husband. This is $36,000 per annum - and very little, if any, tax taken off that - which goes a long way in Detroit. Even the headline figure in sterling is £22,500, which is a seriously good pension. Perhaps if pensions had been less generous in the past, Detroit wouldn't be in such a mess in the present.

In the end, though, you can't fight the math, as resident Jean Vortkamp discovered:

Jean Vortkamp got emotional as she described the bleak state of city services. She said the body of a young murder victim remained on her street for five hours before being removed.
"Detroit is not an airline or a cupcake company. We are a family that deserves respect," Vortkamp told the judge.
Well, Ms. Vortkamp, why are the city services so appallingly poor? It's because they have no money. Why do they have no money? Because 25% of it is already being spent on pensioners, and the police and fire department costs have exploded. The city is not getting anywhere near the bang per buck that it used to.

I have no idea how Detroit can be fixed. I have serious doubts it can be fixed. You can't magic new income for the city - profitable people and companies are leaving the city rather than be taxed to the hilt. You can't stop spending on the police force, the murder rate is bad enough already. The federal government won't touch Detroit with a bargepole, since any federal intervention will set a precedent for unbounded claims on the government as other major cities follow Detroit. Razing Detroit to the ground and sowing the earth with salt is rhetorically attractive but glibly skips over the pensioners who had a not unreasonable expectation of adequate provision for their old age and are left with little.

If you want to pin the blame on anyone, I'd start with the mayors of Detroit, especially those since 1960 when the employment and spending really started to get out of hand:

  • Louis Miriani (R) 1957-1962
  • Jerome Cavanagh (D) 1962-1970
  • Roman Gribbs (D) 1970-1974
  • Coleman Young (D) 1974-1994 (yes, 20 years in power)
  • Dennis Archer (D) 1994-2001
  • Kwame Kilpatrick (D) 2002-2008
  • Kenneth Cockrel Jr (D) 2008-2009
  • Dave Bing (D) 2009-present
I would personally drag each of these gentlemen into court, strip them of their financial assets and use the confiscated funds to support affected pensioners. I don't see it making a big difference to the debt mountain, but at least it would make me feel better. It also might just make high-spending politicos in current state and city administrations think twice before glibly promising future money for which they have made no sensible provision.


A free market in censorship

Readers of this blog will be aware of my feelings towards the current Chinese government and their attitude towards suppression of free speech. I do, however, have to give them credit; they have created quite the free market in online censorship tools:

King's dabble in Internet entrepreneurialism has shown that Chinese censorship relies more heavily than was known on automatic filtering that holds posts back for human review before they appear online. The researchers also uncovered evidence that China’s vast censorship system is underpinned by a surprisingly vibrant, capitalistic market where companies compete to offer better censorship technology and services.
If you're running an online business in China, especially if you intend to offer per-user accounts, you have no option but to co-operate with one of the approved businesses which will help you conform to the requirements of the Chinese government in censoring posts, providing information on user identity on demand, etc. An object lesson in this came from ex-head of Yahoo! Jerry Yang, who testified to Congress in 2007 regarding the arrest of journalist Shi Tao following Yahoo! turning over Tao's identity to Chinese officials:
In February, 2006, Yahoo's Callahan had testified that Yahoo did not know why Chinese officials wanted information on Tao. But several months later, a U.S. advocacy group for religious and political prisoners in China published translations of documents sent to Yahoo from Chinese officials stating that Tao was suspected of divulging state secrets. "What those documents say is that, at the very least, Yahoo's Beijing office knew what crimes were being investigated when they were approached by law enforcement in China," says Joshua Rosenzweig
You have to feel at least a little sorry for Yang, who was carried along on a wave of enthusiasm about investment in China without, presumably, being informed of what Yahoo! would be obliged to do for the Chinese government in return. Of course, poor Shi Tao is the one who really got it in the shorts.

But back to modern online business in China. If (heaven forbid) this censorship system was implemented in the UK I can imagine a new body, say the "Online Identity Check Executive", issuing reams of decrees about how censorship should be conducted, appropriate regulations, "best practice" advice and an "Approved Code of Practice" booklet issued annually and consuming several inches of shelf space. The instinct for bureaucrats is to control finer and finer details in order to increase the need for their organization. That makes it all the more remarkable that in China, how you satisfy the government is really up to you, and there's an enthusiastic market in tools, systems and people to help you maximise your bang per yuan in your censorship systems:

Companies are free to run their censorship operations mostly as they wish, as long as they don’t allow the wrong kind of speech to flourish. That creates an incentive to find ways to censor more effectively so as to minimize the impact on profitability.
Interestingly, "the wrong kind of speech" seems to focus more on collective action than on isolated "the system sucks" speech. The Chinese government are clearly terrified of an organized rebellion, along the lines of 1989's Tiananmen Square action but more coherent and better planned. The article also notes the censorship rate: about 2 censors per 50,000 users seems to be the minimum for effective censorship assuming the use of reasonably effective tools to pre-screen posts for censor review.
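To give a sense of how pre-screening tools can keep the human workload down to a couple of censors per 50,000 users, here is a purely illustrative toy - the flagged terms are invented, not taken from any real system. Posts are published immediately unless they trip the filter; only the flagged minority is held for a human censor:

```python
# Toy illustration (not any real vendor's product): a pre-screening
# filter that holds a small fraction of posts for human review.
FLAGGED_TERMS = {"protest", "gather", "march"}  # hypothetical collective-action terms

def prescreen(post: str) -> str:
    """Return 'publish' or 'hold-for-review' for a single post."""
    words = set(post.lower().split())
    return "hold-for-review" if words & FLAGGED_TERMS else "publish"

posts = ["the system sucks", "let us gather at the square"]
print([prescreen(p) for p in posts])  # ['publish', 'hold-for-review']
```

Note that the complaint sails through while the call to collective action is held, matching the researchers' observation about what "the wrong kind of speech" actually means in practice.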

So the Chinese government has earned a certain grudging admiration for taking a blatantly capitalistic approach to maximising the effect of its censorship. Of course, the companies actually providing these tools are enabling the censorship in the first place, but even then they could argue that they are maximising the ability of Chinese citizens to engage online, censoring the minimum number of their posts - after all, manual censor review costs money, so the fewer posts selected for review the better.


Client-side encryption in the cloud

I'm not entirely sure where this post is going to go, so bear with me. I've been thinking about encryption during the past few days, and how it relates to "cloud" services (Amazon Web Services, Google Compute Engine, Microsoft Azure, etc.). If you can't trust your cloud storage provider to repel attempted security breaches, whether from government agencies or criminal enterprise, what can you do about it?

First, a bit of background. "The cloud" is commonly perceived as an amorphous blob of computing power, spread across cities / countries / continents as appropriate. The supposed killer application of the cloud is that it removes the need for a company to rent space in several data centers, build out an appropriate security system, build out a load balancing system to handle one or more data centers being down, employ a dedicated set of data center operations and systems staff... instead it can just "rent" 1PB of storage - where one petabyte (PB) is 1024 terabytes (TB), and 1 TB is a standard hard disk size - and specify that it be split across at least four geographic locations, where at least 2 are in Europe and at least 2 are in the USA. That way it can keep four copies of its key information, and (since data centers get periodically taken down for maintenance or via accidents) at least one copy of the information will almost certainly be available very close to a given user, and at least one copy is practically always available somewhere in the world. Job done, right?
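The placement policy just described is easy to state as code. This is a hypothetical sketch - the region labels and data-center names are invented, and a real provider's placement API would look nothing like this - but it captures the constraint:

```python
# Hypothetical sketch of the placement policy described above: at least
# four copies, at least two in Europe and at least two in the USA.
def placement_ok(replicas):
    """replicas: list of (datacenter_name, region) tuples."""
    regions = [region for _, region in replicas]
    return (len(replicas) >= 4
            and regions.count("EU") >= 2
            and regions.count("US") >= 2)

plan = [("dc-frankfurt", "EU"), ("dc-dublin", "EU"),
        ("dc-virginia", "US"), ("dc-oregon", "US")]
print(placement_ok(plan))       # True
print(placement_ok(plan[:3]))   # False: only one US copy, and three in total
```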

Well, not quite. The first problem is that the company's users in (say) the UK and Germany may well be talking to a data center in France for most of the time. That means that their key data is flowing across national borders, and very vulnerable to being tapped by a random bad guy who may or may not represent a government. Since industrial espionage and national wiretapping are uncomfortably close, how can the company protect against this?

For high security data, the problem can usually be solved with SSL (Secure Sockets Layer) communication. The idea here is that the server (in the cloud) and the client (in the company site) negotiate to establish a shared secret key before starting to communicate data. Essential to the security of this approach is that the server can "sign" a challenge from the client, proving that it knows a secret ("private key") that only the server should know, without giving away to an observer what that private key actually is. So the server has to be provided with a suitable private key as part of the cloud set-up process, and the client needs to be told what a valid server "signature" looks like.
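The client side of that negotiation can be sketched with Python's standard `ssl` module. This is a minimal illustration, not a production client - the host and request are placeholders, and a real deployment would handle larger responses and might pin specific certificates:

```python
import socket
import ssl

# The default context loads the system's trusted CA certificates, which
# is how the client knows what a valid server "signature" looks like.
context = ssl.create_default_context()

def fetch_over_tls(host: str, request: bytes) -> bytes:
    """Send one request over an authenticated, encrypted connection."""
    with socket.create_connection((host, 443)) as raw:
        # The handshake inside wrap_socket negotiates the shared secret
        # and verifies the server's proof that it holds its private key.
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(request)
            return tls.recv(4096)

# e.g. fetch_over_tls("example.com", b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```

The important defaults here: the context requires a certificate from the server and checks that it matches the hostname, so a man-in-the-middle without the private key fails the handshake.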

One aspect of encryption that is often overlooked is that, for a user's data to be available in data centers A, B, C and D, whenever the data changes in one data center it has to be copied ("replicated") to all other data centers. Because this often happens after a user's conversation with a data center has finished, this can't be done as part of the user's SSL connection; it has to be managed separately. If you don't encrypt this later communication, a very clever eavesdropper who has determined some information about the structure of your data and messages can eavesdrop on the inter-datacenter communications. This is what Google has announced recently: they now encrypt all traffic between their data centers, to foil such eavesdropping. This way, every "pipe" between the user and any computer on which they might store data is encrypted.

So far so good, but the user's data is sitting on several hard drives scattered across the globe. What if an intruder gains access to one of the machines with this data? Well, currently he or she can read the user's data with impunity. We can try to fix this with encryption; pick a secret key, store the data in encrypted form, then decrypt it as it's read. The problem here is that you have to keep the secret key somewhere, and access it whenever the user wants to read their data; this means, in practice, storing a copy of the secret key in each data center and having a very robust way of checking whether a machine is authorised to know it just before decrypting the data. Now you've just shifted the hacker's problem; he or she has to find the key store and compromise it. This is probably not much harder than compromising the original machine.
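The structural point - the key has to live somewhere the server can reach, and that somewhere becomes the target - can be made concrete with a deliberately toy example. The SHA-256 counter keystream below is for demonstration only, not a real cipher; in practice you would use a vetted authenticated cipher from a proper library:

```python
import hashlib
import secrets

# TOY cipher for illustration only: XOR with a SHA-256 counter keystream.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh per record, stored with the data
    return nonce + bytes(a ^ b for a, b in
                         zip(plaintext, keystream(key, nonce, len(plaintext))))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, body = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(body, keystream(key, nonce, len(body))))

key = secrets.token_bytes(32)   # this is what the key store must guard
blob = encrypt(key, b"medical record")
assert decrypt(key, blob) == b"medical record"
```

An intruder who steals the disk gets only `blob`; an intruder who also compromises the key store gets everything, which is exactly the shifted problem described above.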

The other problem is compelled access; if your cloud hosting company is given a court order to provide some entity with your data then they can decrypt your data at will and hand it over in the clear. So how do you protect your data?

The obvious answer is that you should keep the secret key yourself. There are two flavours of approach here, and each has its problems. The superficially attractive approach is that you pass in a copy of your key each time you want to access your data; because your connection to the server is secure (see above) this is nominally safe. The server keeps your key in memory, reads the encrypted data into memory, decrypts it and sends it back to you in clear text, and then wipes its memory to overwrite the key and clear text. An attacker would have to have access to the machine at the time you are accessing your data, and be able to read the relevant segment of memory to get hold of the key, or alternatively compromise the server software itself and get it to squirrel away copies of keys as they are received; this is significantly harder, but still feasible. So this is better, but far from perfect.

The "bullet-proof" approach is never to send your encryption key to the server at all. Instead you send and receive encrypted data, keeping a copy of the key on your personal machine, and encrypt/decrypt the data as it passes from and to your machine. This is essentially a perfect defence against a compromised cloud provider. One problem, though, is that it negates many of the benefits of cloud hosting; any processing (e.g. indexing) of the data you provide has to be done on your own machine, as the cloud systems will never have access to the clear text to be able to index it. All they are doing is providing a distributed, expensive and slow hard drive for you, which is not worth very much money to most people. And, of course, your machine will now be the target of crackers who will mail you malware and try to get you to browse malware-serving web pages to compromise your system and send them your key, or even key-log your keyboard to catch your key as you enter it.
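The shape of that flow can be sketched as follows. The XOR "cipher" here is a stand-in for real client-side encryption, and the dictionary stands in for the provider's blob store; what matters is where each step runs:

```python
from itertools import cycle

# Placeholder cipher: repeating-key XOR, its own inverse. A real client
# would use a proper symmetric cipher; the structure is the point here.
def toy_encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

toy_decrypt = toy_encrypt

cloud = {}  # the provider: only ever sees ciphertext

# --- client side: the key never leaves this machine ---
key = b"kept-on-the-client-machine-only"
cloud["notes.txt"] = toy_encrypt(key, b"my private notes")

# The provider stores bytes it cannot read or index...
assert cloud["notes.txt"] != b"my private notes"
# ...but the client can always recover the plaintext locally.
assert toy_decrypt(key, cloud["notes.txt"]) == b"my private notes"
```

This is why the cloud degrades into "a distributed, expensive and slow hard drive": any indexing or search over the plaintext has to happen on the client, because the server-side `cloud` store never holds anything searchable.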

Worse, if you are responsible for your encryption key, you had better make sure you don't lose it. For a key to be strong (proof against distributed cracking) it has to have a lot of information and hence will tend to be hard to remember exactly. If you forget it, you are screwed - your encrypted data is just wasted space on a hard drive. Perhaps you will use a "password safe" program to store these keys, but then a) you need to trust that the password safe program has not itself been compromised and b) you have to remember the password safe key and keep it safe from keylogging...

All of this goes to show that if you want to keep data in the cloud, and you want it to be secure, it turns out to be a very hard problem if you are defending it against a capable and determined opponent. The best approach seems to be to have a reasonably robust encryption scheme (say, server-side encryption) and accept the risk of hosting company compromise; to defend against this, try not to have data that anyone would actually find interesting enough to try to decrypt.


How public money is spent

Rarely have I seen a more perfect example of the care with which public bodies expend public (tax) money. The Bay Bridge connecting San Francisco with Oakland is a $6.4bn two-span bridge, connected by Yerba Buena Island in the Bay. There is already a bike + pedestrian path on the 2km east span (Yerba Buena Island to Oakland). So why not provide a dedicated bike path on the 1.5km west span?

Why not, indeed. Here (from the proponents) is the case for a bike path over San Francisco Bay:

With the east span finally open, planners are already at work on the next mega-Bay Bridge project - a $1 billion-plus makeover of the western span that would include a $500 million hanging bike path.
"I'm sure the bike advocates are going to start agitating for that" soon, Metropolitan Transportation Commission Executive Director Steve Heminger said
I'm sure they are. Since they're not paying for it, why wouldn't they? And who is going to pay for it?
Drivers are already paying up to $6 at peak hours to cross the Bay Bridge. Redoing the western side to include the bike path would probably mean "putting something in front of the voters," - like a "temporary" $1 hike in bridge tolls, said MTC spokesman Randy Rentschler.
Let's do the maths here. Over a 40 year lifespan, assume that the $500M cost is approximately tripled by maintenance (5% per year, which seems optimistic if anything). Looking at weekday traffic of ~ 250 days per year, that's 10,000 days of cycle traffic. Divide $1.5bn by 10,000 and you see a $150,000 / day cost of the bike lanes. Assuming - in a fit of optimism - that 10,000 bikers per day decide to bike to Oakland, over the bridge and along to their office in (notably steep and traffic-snarled) San Francisco, that's $15 per biker per day. Are they going to charge that to bikers? My arse, they are. This, as Rentschler notes, will be going straight onto the tolls of motorists (as they are a captive audience) who will reap less than zero benefit from this extension. But the San Francisco politicos have a Green source of campaigning dollars to appease, and they're in no danger of being voted out, so why wouldn't they do this?
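The arithmetic, spelled out (all figures are the assumptions from the paragraph above):

```python
# Back-of-the-envelope bike-path costings.
capital = 500e6                    # $500M construction cost
lifetime_cost = capital * 3        # ~tripled by 5%/year maintenance over 40 years
days = 40 * 250                    # 40 years of weekday cycle traffic
cost_per_day = lifetime_cost / days
bikers_per_day = 10_000            # the optimistic assumption
cost_per_biker = cost_per_day / bikers_per_day

assert days == 10_000
assert cost_per_day == 150_000.0   # $150k per day of bike lane
assert cost_per_biker == 15.0      # $15 per biker per day
```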

This is a classic destruction of wealth. That $1.5bn of money could easily be invested into a project with a better social pay-off - or even, horror, returned to the voters to spend in lowered state and city taxes. However the bike path is an irresistibly visible and "green" capital project for egocentric politicos to attach their name to. There's no way that it's not going to be built, despite the economic insanity of the project.


Glenn Greenwald veering off the tracks

I've followed the Snowden/Greenwald/Miranda saga with varying degrees of fascination and disgust - the UK government didn't exactly cover itself in glory in how it intercepted Greenwald's partner David Miranda - and so I was intrigued by Greenwald's latest missive in the Grauniad: "US and UK spy agencies defeat privacy and security on the internet". For a start, I'd have capitalised the "i" in "Internet" but I digress... An amateur crypto enthusiast like myself usually finds plenty of groan room in these kinds of articles, so what is Greenwald discussing, and why is he writing in conjunction with Guardian diplomatic correspondent Julian Borger and ex-Wikileaks "data journalist" James Ball?

According to Greenwald, NSA and GCHQ are heavily involved with defeating encryption with three main thrusts:

Those methods include [1] covert measures to ensure NSA control over setting of international encryption standards, [2] the use of supercomputers to break encryption with "brute force", and – the most closely guarded secret of all – [3] collaboration with technology companies and internet service providers themselves.
I can believe [2] without breaking sweat. Brute-forcing encryption should be the stock-in-trade of any serious spy agency. Even if the opposition encrypts their files / hard disk, typically they have a lousy choice of password because they want it to be memorable and easy to type - two aspects directly benefitting a brute-force attack. Any password typed by a human should make a brute-force attack attractive unless that human is particularly crypto-aware. If you can't decrypt by brute force, another approach is to install a keylogger (e.g. via malware mailed to the user) and catch him or her typing the password directly.
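To make the point concrete, here is a toy sketch of why weak passwords fall to exhaustive search; the hypothetical `crack` function simply enumerates short lowercase strings, with SHA-256 standing in for whatever the real key-derivation step is:

```python
import hashlib
import itertools
import string

def crack(target_hash, max_len=4):
    # Exhaustive search over short lowercase passwords. Real attacks add
    # dictionaries, mangling rules and GPU farms, but the principle is
    # the same: short, memorable passwords fall almost instantly.
    for length in range(1, max_len + 1):
        for combo in itertools.product(string.ascii_lowercase, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).digest() == target_hash:
                return guess
    return None

assert crack(hashlib.sha256(b"cat").digest()) == "cat"
```

Each extra character multiplies the search space by the alphabet size, which is why only long, high-entropy passwords (or crypto-aware humans) defeat this approach.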

Let's talk about [1], though: influencing international encryption standards. The problem with this idea is that many, many very competent cryptographers are involved in the selection of an encryption or hashing standard such as AES or SHA-3. The overall process may be overseen by US agency NIST, but the decision process and factors are, by design, completely open. Usually there's a fairly clear choice of the shortlist of candidate functions; a cryptographically weak function is likely to have a sufficiently obvious record of theoretical attacks that it would be a glaring anomaly for NIST to short-list it. I could just about believe that NSA could influence the choice between candidates #1 and #2 for the winner, but frankly I don't see it buying them much. There's a theoretical possibility that NSA knew of a better-than-brute-force attack against candidate #2 and "persuaded" NIST to choose #2 instead of #1, but I can't see NIST members accepting that; and the downside publicity would far outweigh the relatively small computational gain in near-brute-force attacks.

It's notable that in the peculiar case of the random number generator Dual_EC_DRBG NIST approved four functions. Three were fine, but the fourth - Dual_EC_DRBG - was the only openly NSA-championed one. It was also very slow, had suspicious "magic numbers", and its output fell notably short of the level of entropy (randomness) it should have had. Later analysis by Microsoft cryptographers showed something that looked very much like a back door related to the magic number selection. Why would the NSA champion something that was so startlingly broken? We may never know for sure, but it was as popular as a bacon-wrapped pork chop in Mecca. This was the standardisation process working as intended - everyone was well informed to steer clear of this candidate despite ostensible NSA support.

By contrast I invite the reader to consider the advice of the NSA to IBM to change the data values in the "S-Boxes" forming part of the DES encryption process in 1977. Many years later it became clear that the values NSA had proposed made DES substantially stronger against differential cryptanalysis than the original values. It seems probable that NSA knew of differential cryptanalysis techniques back then, and deliberately made DES resistant to this attack. Why might that be?

Every spy agency wants to eavesdrop on everyone. But there's a trade-off. If everyone's crypto is weak, many other foreign spy agencies will be able to do the same thing; if your country (the USA) is one of the most prominent in world commerce, the limited gain to the domestic spy agency from being able to read commercial communications will be dramatically offset by other countries - Russia, China for instance - where government, espionage and commerce have an unhealthy intersection. It's in the NSA's interest to give good crypto to American firms and, by extension, people. They need to leverage their unique advantages (computing power, mathematical excellence) to target the threats that matter.

By contrast, the Chinese Government blocks most uses of Virtual Private Networks crossing the country's electronic borders, and can block or mount man-in-the-middle attacks on SSL (secure web traffic) connections. Why isn't Greenwald shouting about this? Is it because net censorship and blatant interception isn't interesting if it's a Communist country?

In the interest of neutrality, I'd point out that [3] is justified at least in part by previous NSA behaviour with regard to exported crypto. Back in 1997 it turned out that export versions of Lotus Notes made 24 bits of the 64 bits of the encryption key available to the NSA:

When sending e-mail messages, Lotus uses a 64 bit key. But in export editions, 24 bits of the key are broadcast with the message, reducing the effective key length to 40 bits. The 24 bits are encrypted using a public key created by the NSA. This is called the Workfactor Reduction Field. Only NSA can decrypt the information in the Workfactor Reduction Field. Once the key length is reduced to 40 bits, fast modern computers can break the code in seconds or minutes.
The NSA aren't the only ones making exported crypto weak. Witness the British sale of Enigma machines to developing countries after World War 2: "Hey, here's a totally secure encryption system!" Since Enigma had been comprehensively broken by the British, they could read any cipher traffic they chose...
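The Workfactor Reduction Field arithmetic quoted above is worth spelling out. Assuming a purely illustrative guess rate of a billion keys per second:

```python
# Effect of escrowing 24 of the 64 key bits to the NSA.
full = 2 ** 64          # key space of the full 64-bit key
reduced = 2 ** 40       # effective space once the escrowed 24 bits are read
assert full // reduced == 2 ** 24   # ~16.7 million times less work

rate = 1e9              # hypothetical guesses per second
assert reduced / rate < 1100        # under ~18 minutes, worst case
assert full / rate > 1.8e10         # versus centuries for the full key
```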

Summary: storm in a teacup. NSA can't negatively affect international encryption standards in practice, and indeed should have a vested interest in these being strong. They totally do brute-force encryption but this only works for very targeted attacks on personally-encrypted files - they can't realistically brute-force regular HTTPS web encryption. They do try to get NSA-specific back doors into exported crypto, but they've been doing this for at least 20 years that we know of. This is not news.


Science is about prediction and refutability

The recent Guardian editorial on climate change was a near-perfect encapsulation of everything I think is wrong with the current "scientific consensus" on climate change. Before I start panning parts of the article though, let me start by acknowledging that it does highlight that current temperatures have diverged from predictions, and that we really don't understand why - it doesn't even fall into the trap of presenting post-hoc explanations as fact:

There is, however, a serious debate about why the observed temperatures have not kept pace with computer-modelled predictions and where the heat that should have registered on the global thermometer has hidden itself.
On the other hand, it fails to nail just how important the failure of these past predictions is to the climate change debate.

Science is about prediction and refutability. You use your best measurements and scientific theories to determine what you think is going to happen; you then make a public announcement and justification of this theory and associated predictions, and the criteria for measuring their success or failure in the designated timespan. We were fairly clear back in the late 90s that everyone was predicting steadily increasing global temperatures, based on the (objectively measurable) amount of carbon dioxide in the atmosphere and its effect as a greenhouse gas. Since then, reality has not dealt kindly with the predictions. There has been warming in parts of the globe, but not elsewhere; global temperatures have been basically flat over the past decade. There have been spikes and troughs, but nothing sustained. If carbon dioxide levels drive temperature rises, and atmospheric CO2 has been climbing steadily, where is the associated temperature rise that this theory would predict?

If reality does not match your predictions, you have to face the possibility that you do not actually understand the system you are modelling. What a lot of people are missing is that when "scientists" go back to tweak the failed computer simulations so that they then correctly model the past years they are not performing science. At best, it's a sanity test for their adjusted models, but it's not a verification in any form. The only way they can repair their reputation is to start over; produce and justify predictions for the next N years, set out success/failure criteria, and wait. I would say that five years is the minimum period for which we should demand a prediction, and ten years is more like it. Therefore we won't know until around 2020 whether the current theories and predictions are any good. Should we base major economic decisions on these conjectures?

I contrast the article's reports of current climate chaos:

Twelve of the 14 warmest years on record have occurred since 2000; the last two years have been marked by catastrophic floods in Australia and record-breaking temperatures in the US; and the loss of north polar ice has accelerated at such a rate that climate modellers expect the Arctic Ocean to be routinely ice-free in September after 2040.
with the reality of thickening ice in the Arctic blocking the Mainstream attempt to row the Northwest passage:
Severe weather conditions hindered our early progress and now ice chokes the passage ahead. Our ice router Victor has been very clear in what lies ahead. He writes, “Just to give you the danger of ice situation at the eastern Arctic, Eef Willems of “Tooluka” (NED) pulled out of the game and returning to Greenland. At many Eastern places of NWP locals have not seen this type ice conditions. Residents of Resolute say 20 years have not seen anything like. Its, ice, ice and more ice. Larsen, Peel, Bellot, Regent and Barrow Strait are all choked. That is the only route to East. Already West Lancaster received -2C temperature expecting -7C on Tuesday with the snow.”
and the lack of hurricanes in the Atlantic this year:
Seasonal predictions were for an above-normal season. The 30-year average is for 12 storms with winds of at least 39 miles per hour, the threshold at which they are named. Nineteen such systems formed in each of the last three years.
The Arctic ice is a particularly interesting case. It seemed very clear a few years ago that the Arctic ice was getting thinner and less expansive every summer, and an ice-free season seemed like a slam dunk. Now, suddenly, the ice is getting thicker again - a lot thicker, jumping back towards the 1998-2010 average. Why is this happening? We don't know; we didn't predict it. Perhaps there's a lot more about the Arctic that we don't understand.

It seems clear that the climate of the world is changing, and it may even be warming. But the degree, if any, of this warming is far from certain. If the world is indeed warming considerably, it's not clear whether we can (in practice) do anything to affect it. If we can do anything to affect it, it's not even clear that we should do anything - if Arctic sea ice melts completely then suddenly we can send a lot more shipping trade north of Canada, avoiding the bottleneck of the Panama canal. If we can't grow corn in the middle of the USA any more, we may be able to grow it much further north in Canada than we can currently. The current media clamour sounds awfully like the syllogism: "Something must be done! This is something, therefore we must do it."

The more I read of climate science, the more I realise that we know sweet F.A. about it. Confusing political propaganda with science is not helping. Mounting witch hunts against scientists sceptical of current orthodoxy ("denialists!") is not science, it's medieval conformism. If your theory is threatened by a few loudmouths then perhaps the problem is not with the loudmouths. True science welcomes debate and rebuttal. That is, after all, how we learn.