The fragile network

[As ever, you can also read this on the BBC News website. (28/1/07 – it was also picked up by CircleID.)]

One of the more persistent founding myths around the internet is that it was designed to be able to withstand a nuclear war, built by the US military to ensure that even after the bombs had fallen there would still be communications between surviving military bases.
It isn’t true, of course. The early days of the ARPANET, the research network that predated today’s internet, were dominated by the desire of computer scientists to find ways to share time on expensive mainframe computers rather than visions of Armageddon.
Yet the story survives, and lies behind a generally accepted belief that the network is able to survive extensive damage and still carry on working.
This belief extends to content as well as connectivity. In 1993 John Gilmore, cyberactivist and founder of the campaigning group the Electronic Frontier Foundation, famously said that ‘the net interprets censorship as damage and routes around it’, implying that it can find a way around any damaged area.
This may be true, but if the area that gets routed around includes large chunks of mainland China then it is slightly less useful than it first appears.
Sadly, this is what happened at the end of last year after a magnitude 7.1 earthquake centred on the seabed south of Taiwan damaged seven undersea fibre-optic cables.
The loss of so many cables at once had a catastrophic effect on internet access in the region, significantly curtailing connectivity between Asia and the rest of the global Internet and limiting access to websites, instant messaging and email as well as ordinary telephone service.
Full service may not be restored until the end of January since repairs involve locating the cables on the ocean floor and then using grappling hooks to bring them to the surface so they can be worked on.
The damage has highlighted just how vulnerable the network is to the loss of key high-speed connections, and should worry anyone who thought that the internet could just keep on working whatever happens.
This large-scale loss of network access is a clear example of how bottlenecks can cause widespread problems, but there are smaller examples that should also make us worry.
At the start of the year the editors of the popular DeviceForge news website started getting complaints from readers that their RSS feed had stopped working.
RSS, or ‘really simple syndication’, is a way for websites to send new or changed content directly to users’ browsers or special news readers, and more and more people rely on it as a way to manage their online reading.
The editors at DeviceForge found that the reason their feed was broken was that the particular version of RSS they were using, RSS 0.91, depended on the contents of a particular file hosted on the server at www.netscape.com.
It looks as if someone, probably a systems administrator doing some clearing up, deleted what seemed to be an unneeded old file called rss-0.91.dtd, and as a result a lot of news readers stopped working.
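
To see how brittle that kind of dependency is, here is a rough sketch, not DeviceForge’s actual code, of a hand-rolled RSS 0.91 feed and two ways of reading it with Python’s lxml library. The DOCTYPE it carries is the one RSS 0.91 feeds conventionally declared, pointing at the Netscape-hosted DTD; the feed content, parser settings and error handling are my own illustration.

    # A rough sketch (not DeviceForge's code) of why a feed that declares the
    # old Netscape DTD can break the moment that one remote file disappears.
    # Assumes the lxml library; the feed itself is invented for illustration.
    from lxml import etree

    FEED = b"""<?xml version="1.0"?>
    <!DOCTYPE rss PUBLIC "-//Netscape Communications//DTD RSS 0.91//EN"
      "http://my.netscape.com/publish/formats/rss-0.91.dtd">
    <rss version="0.91">
      <channel>
        <title>Example feed</title>
        <description>A hand-rolled RSS 0.91 channel</description>
        <link>http://example.org/</link>
        <language>en-gb</language>
        <item>
          <title>First story</title>
          <link>http://example.org/first</link>
        </item>
      </channel>
    </rss>"""

    # A lenient parser ignores the DOCTYPE and happily reads the feed...
    lenient = etree.XMLParser(load_dtd=False)
    print(etree.fromstring(FEED, lenient).findtext("channel/title"))

    # ...but a validating parser first tries to download the DTD. If the file
    # on Netscape's server has been deleted, parsing fails and the whole feed
    # is rejected, even though its content is perfectly good.
    strict = etree.XMLParser(load_dtd=True, dtd_validation=True, no_network=False)
    try:
        etree.fromstring(FEED, strict)
    except etree.XMLSyntaxError as err:
        print("feed rejected:", err)

A forgiving parser never asks for the DTD and keeps working; a strict one treats the missing file as a fatal error, which is roughly the position the affected news readers found themselves in.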
Having what is supposed to be a network-wide standard dependent on a single file hosted on a specific server may be an extreme case, but it is just one example of a deeply-buried dependency within the network architecture, and it is surely not alone.
This is going to get worse. The architecture of the Internet used to resemble a richly-connected graph, with lots of interconnections between the many different levels of network that work together to give us global coverage, but this is no longer the case.
The major service providers run networks which have few interconnections with each other, and as a result there are more points at which a single failure can seriously affect network services.
There may even be other places where deleting a single file could adversely affect network services.
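
How much the shape of the graph matters can be seen with a back-of-the-envelope simulation. The sketch below uses Python and the networkx library with invented toy graphs rather than real routing data: it removes the handful of best-connected nodes from a richly interconnected network and from a hub-dominated one, then reports how much of each stays reachable.

    # A toy comparison (invented graphs, not real internet topology): how much
    # of a network stays reachable after its few best-connected nodes fail?
    # Assumes the networkx library is installed.
    import networkx as nx

    def surviving_fraction(graph, hubs_to_remove=5):
        """Remove the most-connected nodes and return the share of the
        original network left in the largest connected piece."""
        g = graph.copy()
        by_degree = sorted(g.degree, key=lambda pair: pair[1], reverse=True)
        g.remove_nodes_from(node for node, _ in by_degree[:hubs_to_remove])
        biggest = max(nx.connected_components(g), key=len)
        return len(biggest) / graph.number_of_nodes()

    # A richly interconnected network with plenty of redundant links...
    meshy = nx.gnm_random_graph(1000, 3000, seed=1)
    # ...versus one where almost everything funnels through a few hubs.
    hubby = nx.barabasi_albert_graph(1000, 1, seed=1)

    print("richly connected net keeps %.0f%% reachable" % (100 * surviving_fraction(meshy)))
    print("hub-dominated net keeps %.0f%% reachable" % (100 * surviving_fraction(hubby)))

The exact figures depend on the random seed, but the pattern does not: the mesh barely notices the loss, while the hub-dependent network falls to pieces. The first reader comment below makes the same point in terms of random versus scale-free networks.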
If we are to avoid these sorts of problems then we need good engineers and good engineering practice. We have been fortunate over the years because those designing, building and managing the network have cared more for its effective operation than they have for their personal interests, and by and large they have built the network around standards which are robust, scalable and well-tested.
But we need to carry on doing this and make things even better if we are going to offer network access to the next five billion users, and this is getting harder and harder to do.
In the early days the politics was small-scale, and neither legislators nor businesses really took much notice, but this is no longer the case as we see in the ongoing battles over internet governance, net neutrality, content regulation, online censorship and technical standards.
Bodies like the Internet Society, the International Electrotechnical Commission and the Internet Engineering Task Force still do a great job setting the standards, but they, like the US government-appointed ICANN, are subject to many different pressures from groups with their own agendas.
And setting technical standards is not enough to guard against network bottlenecks like the cables running in the sea off Taiwan, since decisions on where to route cables or how the large backbone networks are connected to each other are largely made by the market.
The only body that could reasonably exert some influence is the International Telecommunication Union, part of the UN. Unfortunately its new Secretary-General, Hamadoun Touré, says that he does not want the ITU to have direct control of the internet.
Speaking recently at a press conference he said ‘it is not my intention to take over the governance of Internet. I don’t think it is in the mandate of ITU’. Instead he will focus on reducing the digital divide and on cyber-security.
These are worthy goals, but they leave the network at the mercy of market forces and subject to the machinations of one particular government, the United States. If we are going to build on the successes of today’s internet and make the network more robust for tomorrow we may need a broader vision.

Bill’s Links

Gilmore quote (and others):
Asian quake and effect on the net
RSS: Netscape

3 Replies to “The fragile network”

  1. Hi,

    Even if every precaution in the world is taken, the topology of the net may be a weakness.

    In the early days, I presume that the internet was like a random network, which means that there were no nodes with many more links than others. Sites were linked in a way that resembled a random choice.

    Nowadays, I would say that a few nodes (hubs) link the majority of sites. It is what is called a scale-free network.

    The problem is that, according to complex network theory, a random network is more resilient than a scale-free network.

    If you attack a hub, the net will likely fall apart, while in a random network the failure of a single node poses little risk to the net as a whole.

  2. The Internet was designed to be resilient and fault tolerant, not to survive a nuclear war. That said, the people who designed it worked for DARPA, which is a part of the Army, doing original research. They were the ones who came up with the way the Minuteman missile systems were designed in the 60s to have the same characteristics, where the loss of one connection will not disable the entire network but allows for routing around the fault.

    If you want the true story, which is more interesting than the myth, read the book “Where Wizards Stay Up Late”.

Comments are closed.