[As ever, you can read this on the BBC News website]
We journalists like nothing more than a forthcoming apocalypse, especially when it involves something that most people don’t properly understand. It’s easy to frighten people with talk of ‘superbugs’ or argue that when the Large Hadron Collider is turned on it will create ‘strangelets’ that will destroy the known universe, and even if the stories are more speculation than fact you get a good headline and lots of interested readers.
It has got a lot harder to write this sort of story about the internet over the years as more and more of us are online from home, work or school and have some idea about how the network operates.
You can still get a good ‘internet meltdown’ headline out of projections that show we’re using up all the bandwidth and filling up our network with spam – I’ve done it myself.
But it’s even better if you can focus on aspects of the network’s core architecture that few users ever notice, like the unique numerical addresses assigned to every internet-connected device and the complex mechanisms used to move information between those devices.
And if you’ve got an authoritative report from an international body calling for something to be done then you’re on to a winner.
So when the OECD, the Organisation for Economic Co-operation and Development, announced that we will run out of network addresses in 2010 or early 2011, describing the situation as ‘critical for the future of the Internet economy’ and likely to affect ‘all businesses that require IP addresses for their growth’, it generated a lot of attention on and off-line.
The problem concerns the migration from the current version of the Internet Protocol, version 4, to its more modern and capable replacement, version 6. IP is the glue that holds the loose collection of computing devices that we call ‘the internet’ together. It is one of the most important inventions of the twentieth century and testimony to the fact that when you let engineers build solutions with minimal political or commercial interference they tend to make things that stay working.
IPv4 has changed little since 1981, when it was defined by the Internet Engineering Task Force, and although it has managed to sustain the network as it has grown from tens of thousands of nodes to hundreds of millions, it has been clear for over a decade that while the fundamentals are sound the current version has outlived its usefulness.
As well as issues with security, reliability and authentication, IPv4 makes the process of moving data from one network node to another far more complicated than it needs to be, reflecting the fact that it was designed for a smaller and much simpler network.
It also uses only thirty-two binary digits to number each computer on the network, and while four billion addresses seemed plenty back in 1981 we’re rapidly approaching that number of connected devices. As the OECD notes, ‘there is now an expectation among some experts that the currently used version of the Internet Protocol, IPv4, will run out of previously unallocated address space in 2010 or 2011, as only 16% of the total IPv4 address space remains unallocated in early 2008.’ IPv6 uses 128 bits for addressing, allowing many more addresses than there are stars in the known universe.
This should be enough for most future requirements.
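The difference in scale between the two address spaces is easy to verify with a few lines of Python. This is an illustrative sketch, not something from the original report; it uses the standard library’s ipaddress module only to show that both formats parse, while the arithmetic itself is simple powers of two:

```python
import ipaddress

# IPv4 numbers each device with 32 bits; IPv6 uses 128.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4 address space: {ipv4_total:,}")   # roughly 4.3 billion
print(f"IPv6 address space: {ipv6_total:.3e}") # roughly 3.4 x 10^38

# The standard library handles both address formats directly:
v4 = ipaddress.ip_address("192.0.2.1")     # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")   # documentation-range IPv6 address
print(v4.version, v6.version)
```

Estimates of the number of stars in the observable universe sit around 10^22 to 10^24, comfortably below the 3.4 × 10^38 addresses that 128 bits provide.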
Even though it was defined in the late 1990s and has many useful features, IPv6 deployment has been very slow: a recent report from US security consultants Arbor Networks shows just how slowly. Arbor worked with researchers at the University of Michigan to monitor IPv6 traffic across networks from 87 US and European ISPs for a year and found that it amounted to 0.0026% of all network traffic.
This is a tiny proportion, but the figures are not quite as depressing as they look. According to an excellent analysis of the current situation from Arbor Networks’ Craig Labovitz there are many unseen ‘islands’ of IPv6 connectivity that don’t register on the survey.
For example larger ISPs are using the new protocols to manage their internal systems, especially if they are cable providers using the latest cable modem standard, DOCSIS 3.0, which has IPv6 support built in.
Labovitz also points out that many hardware providers are already building IPv6 into their systems, even if customers aren’t using it. That means v6-capable kit will already be in place when companies decide to make the switch, reducing the cost of the transition.
I have an Apple Airport Extreme providing my wireless network at home, and it has IPv6 support, so I am already using the protocol on my home network. There are even websites like whatismyv6.com that will check which protocol you are using.
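If you’d rather check from the command line than a website, Python’s socket module offers a rough local test. This is a sketch under my own assumptions, not an official diagnostic: has_ipv6 only reports whether the interpreter was built with IPv6 support, and resolving a hostname to an AF_INET6 entry shows it has an IPv6 address on record, not that your connection can actually reach it:

```python
import socket

# True if this Python build was compiled with IPv6 support;
# it says nothing about whether the local network routes v6 traffic.
print("IPv6 compiled in:", socket.has_ipv6)

def address_families(host, port=80):
    """Return the set of address families a hostname resolves to."""
    try:
        return {result[0] for result in socket.getaddrinfo(host, port)}
    except socket.gaierror:
        return set()

# Example use (requires network access, so commented out here):
# print(socket.AF_INET6 in address_families("example.com"))
```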
TCP/IP was invented in 1974, but it took until 1983 for it to be generally adopted as the core protocol for the ARPANET and the networks attached to it, replacing the Network Control Protocol and turning the ARPANET into just one component of the emerging Internet – that’s ‘Internet’ with an uppercase ‘I’, of course.
So it’s hardly surprising that the transition to IPv6 has taken a while. As any programmer working on a new version of an existing product will tell you, the effort required to ensure backwards compatibility is often far greater than that needed to get the new features working.
And of course the IPv4 internet is a remarkably resilient and robust network, with many talented engineers who devote their efforts to solving the problems that the net’s unexpected growth has created. They’ll ensure we avoid the internet meltdown, for which we should all be grateful, but that in itself will reduce the pressure to move to the new, improved version of the Internet Protocol.