I'm driving my forehead into an ever-deepening dent on my desk in despair at the news that
the US Federal Communications Commission has approved
new rules governing net neutrality in the USA. This may seem like the sort of news that a progressive
geek like your humble bloghost would welcome, but it turns out to involve some inconvenient wrinkles.
The EFF, guardians of liberty, were originally cheering for net neutrality.
Then, two days ago, they started to get a little concerned about some of the details being proposed by the FCC:
Unfortunately, if a recent report from Reuters is correct, the general conduct rule will be anything but clear. The FCC will evaluate "harm" based on consideration of seven factors: impact on competition; impact on innovation; impact on free expression; impact on broadband deployment and investments; whether the actions in question are specific to some applications and not others; whether they comply with industry best standards and practices; and whether they take place without the awareness of the end-user, the Internet subscriber.
In essence, the proposed rules for Net Neutrality gave the FCC - a US government agency, headed by
a former lobbyist for the cable and
wireless industry - an awfully wide scope for deciding whether innovations in Internet delivery were
"harmful" or not. There's no way that this could go horribly wrong, surely?
Broadband in the USA
Now, let's start with the assertion that there is an awful lot wrong with broadband provision in the USA at the moment.
It's a lot more expensive than in the UK, it's almost always supplied by the local cable TV provider,
and in most regions there is very little choice, if any. See the
broadband provider guide and filter for a minimum and maximum of one provider - there's an awful lot of the USA with monopoly provision of wired
high-speed Internet.
The dominant ISPs with high-speed provision are Comcast, AT&T, Time Warner, CenturyLink and Verizon. It would be fair
to say that they are not particularly beloved. Comcast in particular is the target of a massive amount of opprobrium: type "Comcast are " into your favourite search engine and you get autocompletion suggestions including "liars", "crooks" and "criminals".
American broadband is approximately twice the price of British, and you generally get lower speeds and higher
contention ratios (you share a pipe of fixed size with a lot of people, so if your neighbours are all watching streaming
video then you're out of luck). As effective monopolies, ISPs are in a very powerful position to charge
Internet services for streaming data to their customers, as last
year's Comcast-Netflix struggle showed - it ended with Netflix effectively forced to pay Comcast to ship the bytes that Netflix customers in Comcast regions were demanding.
Google's upstart "Google Fiber" offering of 1 Gbps (125 MB per second) fiber-optic service tells a story in itself. It targets a relatively short list of cities, but has been very popular whenever it has opened signups. It has spurred other broadband providers to respond, but in a very focused way: AT&T is planning to offer 1 Gbps service, but
only in Google Fiber's inaugural area of Kansas City, which is impressive in its brazenness. Other community-based efforts are starting to bear fruit,
e.g. NAP is proposing their Avalon gigabit
offering in part of Atlanta, Georgia. However, most of the USA is still stuck with practical speeds that have not changed noticeably in half a decade. Entrenched cable ISPs have spent plenty of money on lobbyists to ensure that states and cities make it
expensive and difficult for newcomers to compete with them, requiring extensive studies and limiting rights to dig or string
fiber-optic cable to residential addresses.
So there's clearly a problem; why won't Net Neutrality solve it?
The ISP problem
Net neutrality essentially says that you (an ISP) can't discriminate between bytes from one service and bytes from a different service.
Suppose you have two providers of streaming Internet movies: Netflix and Apple iTunes. Suppose Comcast subscribers in rural Arkansas
pay Comcast for a 20 Mbps service, easily sufficient for HD streaming video. Comcast controls the network, which ends at its customers' home routers; when it receives a TCP or UDP packet (a small chunk of data) from a customer, it looks at the packet's destination
address and forwards it either towards its destination - e.g. a server in the Comcast network - or to one of the other Internet entities it "peers" with. Peering is a boundary across which Internet entities exchange traffic. When data comes back across that boundary addressed to one of its customers, Comcast routes the data to the customer in question. So far, so good.
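To make that concrete, here's a minimal sketch of that destination-based forwarding decision in Python. The address ranges and the three outcomes are invented for illustration, not real Comcast allocations.

    import ipaddress

    # Hypothetical address ranges - purely illustrative, not real allocations.
    CUSTOMER_ADDRESSES = ipaddress.ip_network("198.51.100.0/24")  # the ISP's own subscribers
    INTERNAL_SERVERS   = ipaddress.ip_network("203.0.113.0/24")   # services hosted inside the ISP

    def route_packet(dst_ip: str) -> str:
        """Decide where a packet goes, looking only at its destination address."""
        dst = ipaddress.ip_address(dst_ip)
        if dst in CUSTOMER_ADDRESSES:
            return "deliver to customer"          # stays inside the ISP's network
        if dst in INTERNAL_SERVERS:
            return "deliver to internal server"   # e.g. the ISP's own services
        return "hand off at a peering point"      # everything else crosses the peering boundary

    print(route_packet("203.0.113.7"))  # deliver to internal server
    print(route_packet("8.8.8.8"))      # hand off at a peering point

The point is that nothing in that decision cares whose traffic it is - only where it's going.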
Now, the customer is paying Comcast for their connection, so it's not really reasonable for Comcast to force them to pay more
for data above and beyond the plan they've agreed to. If you've got a 20 Mbps connection, you expect to be able to send and receive at 20 Mbps more or less forever. Comcast might have a monthly bandwidth cap beyond which you pay more or get a lower speed, but that should be spelled out in your plan. Comcast might also weight certain kinds of traffic lower than others, so that when 20 people are contending for a 100 Mbps pipe, traffic which is less sensitive to being dropped (e.g. buffered streaming video) is dropped more often than more sensitive traffic (web page fetches) - but that's all reasonable as long as you know how many people you're contending with and what the rules are.
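As a rough illustration of that kind of disclosed traffic weighting, here's a toy admission check in which the drop decision depends only on the traffic class and whether the shared link is congested - never on who sent the packet. The classes and probabilities are invented for the sketch.

    import random

    # Illustrative drop probabilities per traffic class on a congested link.
    # The classes and the numbers are invented for this sketch.
    DROP_PROBABILITY_WHEN_CONGESTED = {
        "streaming_video": 0.20,  # buffered, so it tolerates some loss
        "web_fetch":       0.02,  # interactive, so it is protected more
    }

    def admit(packet_class: str, link_congested: bool) -> bool:
        """Admit or drop a packet based only on its traffic class, never its source."""
        if not link_congested:
            return True
        return random.random() > DROP_PROBABILITY_WHEN_CONGESTED[packet_class]

    # 20 subscribers contending for the same 100 Mbps pipe at peak time:
    print(admit("streaming_video", link_congested=True))
    print(admit("web_fetch", link_congested=True))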
Streaming video is one kind of traffic that's problematic for ISPs: it requires very little bandwidth from the paying customer, who sends an initial "I want to see this video" message and then a low volume of follow-up messages to control the stream and assure the video streaming service that someone really is still watching it. From Comcast's point of view, though, a large amount of latency-sensitive traffic is coming into their network from a peering point, and they need to route it through to the destination user, using up a large chunk of their network capacity in the process. If lots of people want to watch videos at once, Comcast will have to widen the incoming pipe from their peer; that means buying extra hardware and paying for its associated management overhead so that they can handle the traffic, as long as they are the limiting factor. (Their peer might also be the limiting factor, but that's less likely.)
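A quick back-of-the-envelope calculation, using made-up but plausible numbers rather than measurements, shows how lopsided that traffic is:

    # Rough, illustrative numbers for one hour of HD streaming - assumptions, not measurements.
    stream_rate_mbps    = 5      # assumed HD video bitrate flowing into the ISP's network
    control_msg_bytes   = 500    # assumed size of each "still watching" control message
    control_msgs_per_hr = 120    # assumed one control message every 30 seconds

    downstream_mb = stream_rate_mbps * 3600 / 8                    # megabytes of video per hour
    upstream_mb   = control_msg_bytes * control_msgs_per_hr / 1e6  # megabytes of control traffic

    print(f"video into the ISP:  {downstream_mb:,.0f} MB/hour")
    print(f"control traffic out: {upstream_mb:.2f} MB/hour")
    print(f"ratio: roughly {downstream_mb / upstream_mb:,.0f} to 1")

On those assumptions the customer sends well under a megabyte per hour while a couple of gigabytes of video flow the other way - which is why the cost lands on the ISP's inbound peering link.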
So the more data users stream concurrently, the more it costs Comcast. This can be mitigated to some extent by caching - storing frequently used data within the Comcast network so that it doesn't have to be fetched from a peer each time - and indeed this is a
common strategy used by content delivery networks like Akamai and video streaming firms like YouTube. They provide a bunch of
their own PCs and hard disks which Comcast hosts inside its datacenters, and when a user requests a resource (video, image, music file, new operating system image) that might be available in that cache, they are directed to the cache machines. The cache sends the data directly if it has it; if not, it downloads the data and passes it on, but also stores it locally so that it's ready to send directly to the next person who asks. This massively reduces the bandwidth needed for popular data (large ad campaigns, "Gangnam Style" videos, streaming video releases), and also increases reliability and reduces latency from the user's perspective - but it costs the provider a substantial overhead (and operational expertise) to buy, emplace and maintain the hardware and enable the software to use it.
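Here's a minimal sketch of that cache-on-a-box logic; fetch_from_origin() is a stand-in for the expensive trip back across the peering point, not a real API.

    # Minimal sketch of an in-ISP content cache.
    cache = {}

    def fetch_from_origin(resource_id: str) -> bytes:
        """Stand-in for fetching the resource across the peering link from the content provider."""
        return b"...video bytes for " + resource_id.encode()

    def serve(resource_id: str) -> bytes:
        if resource_id in cache:               # popular item: no peering bandwidth used
            return cache[resource_id]
        data = fetch_from_origin(resource_id)  # first request: pay the peering cost once...
        cache[resource_id] = data              # ...then keep the data locally for the next viewer
        return data

    serve("gangnam-style-1080p")  # fetched across the peering point, then cached
    serve("gangnam-style-1080p")  # served straight from the box inside the ISP's network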
The non-neutral solution
If Netflix aren't willing or able to pay for this, Comcast is stuck with widening the pipe to its peers. One might argue
that that's what they're supposed to do, and that their customers are paying them to be able to access the Greater Internet
at 20 Mbps, not just Comcast's local services. But Comcast might not see it this way. They know which destination and source
addresses belong to Netflix, so they might decide: "we have 100 Gbps of inbound connectivity on this link, and at peak 50 Gbps of that comes from Netflix video streaming source addresses. Let's cap Netflix at 20 Gbps - at peak, any packet from a Netflix video streaming source has a 60% chance of being dropped - and see what happens."
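In code, this is a one-line change to the admission logic sketched earlier: the drop decision now depends on where the packet came from. The address range is again invented, and the 60% figure is just the arithmetic of squeezing 50 Gbps into 20 Gbps: (50 - 20) / 50 = 0.6.

    import ipaddress
    import random

    # Invented range standing in for "addresses known to belong to Netflix's video servers".
    THROTTLED_SOURCES = ipaddress.ip_network("192.0.2.0/24")

    # Squeezing 50 Gbps of peak traffic into a 20 Gbps allowance means discarding
    # (50 - 20) / 50 = 60% of it.
    PEAK_DROP_PROBABILITY = (50 - 20) / 50

    def admit_inbound(src_ip: str, at_peak: bool) -> bool:
        """Drop inbound packets based purely on who sent them - the non-neutral part."""
        if at_peak and ipaddress.ip_address(src_ip) in THROTTLED_SOURCES:
            return random.random() > PEAK_DROP_PROBABILITY
        return True  # everyone else's traffic is untouched

    print(admit_inbound("192.0.2.45", at_peak=True))  # throttled source at peak: ~60% chance of False
    print(admit_inbound("198.18.0.9", at_peak=True))  # anyone else: always True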
You see where the "neutrality" aspect comes in? Comcast is dropping inbound traffic based solely on its source address - which company it comes from. Only internal Comcast configuration needs to change. From the customer's point of view, Netflix traffic is suddenly very choppy or even nonfunctional at peak times - but YouTube, Facebook, Twitter etc. all work fine. So Netflix must be the problem.
Why am I paying them money for this crap service? (Cue angry mail to Netflix customer support.)
Net Neutrality says that Comcast can't do this - it can't discriminate based on source or destination address. Of course,
it's not really neutral, because ISPs might still blacklist traffic from illegal providers such as the Pirate Bay,
but since that's normally done at the request of law enforcement, it's regarded as OK by most.
The problem
The USA has handed the Federal Communications Commission, via the "general conduct" rules, a massive amount of control over, and discretion in, the way
ISPs handle Internet traffic. This presumes that the FCC has the actual best interests of American consumers at heart, and
is intelligent and foresighted enough to apply the rules to that effect. Given the past history of government agencies
in customer service and in being effectively captured by the industries they are supposed to regulate, this seems...
unwise.