Copyright 1997 Nikkei Business Publications, Inc. All rights reserved. Permission is hereby
given for reproduction, as long as attribution is given and this notice is
included.
The future of Internet backbones.
By: Scott Bradner
The Internet is very much a victim of its own
success. There are now at least 13 million computers connected to the Internet.
Internet traffic is growing far faster than anyone could have imagined just a
year ago. The Internet is seen more and more as a key piece of national and
international infrastructure. The advent of Internet-based commerce has the
potential to maintain this growth almost indefinitely. This explosive growth
has strained the Internet infrastructure to the point that people like Bob
Metcalfe have started claiming that the Internet is due for a catastrophic
collapse in the near future. He has said that even if the Internet as a whole
does not collapse, major Internet service providers (ISPs) may be in real
danger of periodic total failure.
Any way you look at it, the Internet
infrastructure is in current or potential trouble. Internet traffic at
MAE-East, an inter-ISP connection point located near Washington, DC, now has
daily peaks of over 750 megabits per second. Traffic on some backbones is
increasing by 30% per month; compounded, that is a factor of about 23
(1.3^12) in a single year. The routing table that the backbone routers must
be able to support now holds over 36,000 routes, and, more importantly, the
data in the routing table can change as often as 110 times per second. The
fastest of the current generation of routers can support interface speeds of up
to 155 Mbps, and this is not fast enough to keep up with the existing demand,
never mind the future demand. (The ISPs that advertise backbones faster than
155 Mbps break the faster links down into multiple parallel links, each with a
maximum data rate of 155 Mbps.) Routers with faster ports are promised but not
yet here.
There is not enough fiber-optic cable in the
ground to support the growing demand, and it takes a long time to install more.
New technologies such as wavelength division multiplexing (WDM) can be used to
expand the capacity of fiber links, but they do so by adding additional
parallel circuits on the same fiber, which means still more router interfaces
will be needed.
Currently, the total market for
backbone-scale routers is limited by the fact that there are not many very
large Internet backbones and by the fact that even the largest of the corporate
networks do not present the same level of demand on the routers. The limited
potential revenues and the cost of development have meant that there are only a
few vendors serving this market.
The interconnect points such as MAE-East are
severely stressed and are seen by many observers as the weakest links in the
Internet infrastructure. Most of the larger ISPs are now bypassing these
interconnect points in favor of private, direct ISP-to-ISP connections.
I have painted an unpleasant picture of where
we now are in the development of the Internet and a potentially bleak outlook
for the future, but I am not yet done being negative.
There are a number of people, mostly people
with telephone company or mainframe backgrounds, who think that if ISPs just
start charging for their services in a different way, everything will be
magically fixed. Most of these people think that if Internet users were charged
a usage-based fee, they would change their behavior to minimize their cost.
This, tied to the ISPs starting to charge each other a usage-based fee to
exchange traffic, is, according to these pundits, supposed to moderate the
traffic growth. I do not think this is true, for two reasons. First, there is
currently no way to record the traffic of an individual user in a way that can
be used by an organization to give feedback to that individual in the same way
that a bill for long distance telephone service can identify each originating
telephone extension. In the case of a telephone bill a manager can talk to an
employee who talks too much. Second, it is very hard for an Internet user to
know in advance what traffic some action will produce. Clicking on one web link
might cause a 10KB file to be read, while clicking on the link beside it could
cause a 10MB file to be sent.
Enough pessimism. The Internet has a great
future; we just have to figure out how to do things so that we can enjoy it
when it comes. Notice that I did not include running out of IP addresses as one
of the looming problems. There is less of a problem in this area than there
once was, for a number of reasons. The use of classless inter-domain routing
(CIDR) has enabled more efficient address assignments that more closely match
the actual needs of organizations than the old class A, B, and C addressing
did.
As long as addresses are assigned following the actual topology of the
underlying network then CIDR addresses can be aggregated in a way that reduces
the size of the backbone routing tables. The use of address translating
firewalls and network address translating (NAT) boxes permits an organization
to use RFC 1918 private addresses within their own network and minimize the
size of the address block they need for external visibility. Finally, IPv6,
with its very large address space, will relieve any remaining scarcity as it
starts to be deployed.
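
As a rough illustration of how CIDR aggregation shrinks routing state, here
is a minimal sketch using Python's ipaddress module; the prefixes are
invented for the example:

    import ipaddress

    # Four adjacent /24 blocks assigned to customers of one ISP...
    customer_blocks = [
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("198.51.101.0/24"),
        ipaddress.ip_network("198.51.102.0/24"),
        ipaddress.ip_network("198.51.103.0/24"),
    ]

    # ...can be advertised to the rest of the Internet as a single /22,
    # so backbone routers carry one route instead of four.
    print(list(ipaddress.collapse_addresses(customer_blocks)))
    # [IPv4Network('198.51.100.0/22')]

The aggregation only works if those four customers actually connect through
that one ISP, which is why the addresses must follow the topology.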
I have outlined both technology and policy
challenges that must be faced. On the technology side there are a number of
promising developments. The above-mentioned address translation technologies
will be of great help in moderating the growth in the size of routing tables
and thus reducing the processing load on the routers. In and of itself,
keeping routing table growth small is not sufficient, since the routing
computational load is a combination of the size of the table and the frequency
with which that table must be recalculated. Many of the ISPs are currently
experimenting with forms of "route damping", in which small, fast
changes in the routing information are delayed and summarized to reduce the
change frequency. The combination should be able to keep the growth in
route-table processing load under the rate of growth in the power of the
processors in the routers. This will be the hardest problem to solve in
keeping us on track for the bigger Internet that is on its way, but with
discipline it can be solved.
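
The exact damping rules vary by implementation, but the basic
penalty-and-decay idea can be sketched in a few lines of Python; the
constants here are arbitrary illustrations, not any vendor's defaults:

    import math

    # A toy model of route damping: each flap adds a fixed penalty, the
    # penalty decays exponentially over time, and a route whose penalty
    # crosses the suppress threshold is withheld from the routing
    # computation until the penalty decays below the reuse threshold.
    FLAP_PENALTY = 1000.0
    SUPPRESS_AT = 2000.0
    REUSE_BELOW = 750.0
    HALF_LIFE = 900.0   # seconds

    class DampedRoute:
        def __init__(self):
            self.penalty = 0.0
            self.last_update = 0.0
            self.suppressed = False

        def decay(self, now):
            elapsed = now - self.last_update
            self.penalty *= math.exp(-math.log(2) * elapsed / HALF_LIFE)
            self.last_update = now

        def flap(self, now):
            self.decay(now)
            self.penalty += FLAP_PENALTY
            if self.penalty >= SUPPRESS_AT:
                self.suppressed = True

        def usable(self, now):
            self.decay(now)
            if self.suppressed and self.penalty < REUSE_BELOW:
                self.suppressed = False
            return not self.suppressed

A route that flaps a few times in quick succession is suppressed, and with
these numbers it is only reconsidered after its penalty has decayed for a
couple of half-lives, sparing every backbone router the repeated
recalculation.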
Part of this discipline is that sites have to
renumber their computers when they switch ISPs. A site needs to renumber when
it moves in the network topology so that its addresses represent its actual
position in the network and can be aggregated with the addresses of others.
This is
needed to slow the growth of the routing table. The use of Dynamic Host
Configuration Protocol (DHCP) makes renumbering far easier and the advanced
renumbering abilities in IPv6 will make it easier still. Alternative addressing
architectures are being explored for IPv6 that could cause the issue of
renumbering to become moot because it will be so easy to renumber a site
without anyone having to reconfigure routers or hosts. The use of NAT boxes or
address translating firewalls can also make the difficulty of renumbering a
thing of the past.
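
To see why dynamic assignment helps, consider a toy model in which hosts
lease addresses from a pool instead of having them hard-coded; renumbering
the site then amounts to repointing the pool at the new ISP's block. The
prefixes and pool mechanics below are invented for the illustration:

    import ipaddress

    # Hosts lease addresses from a pool rather than hard-coding them,
    # so switching ISPs means changing one prefix, not every host.
    def lease_pool(prefix):
        return iter(ipaddress.ip_network(prefix).hosts())

    pool = lease_pool("203.0.113.0/24")             # block from the old ISP
    leases = {name: next(pool) for name in ("alpha", "beta", "gamma")}
    print(leases)

    pool = lease_pool("198.51.100.0/24")            # block from the new ISP
    leases = {name: next(pool) for name in leases}  # leases expire and renew
    print(leases)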
We are going to need bigger and faster
routers: routers that can support individual interfaces in the
gigabit-per-second range. These are being developed now and should be on the
market soon. We are also going to need WDM to create new bandwidth on
existing fiber-optic cables, and WDM development is well under way.
But I think there is one other thing that
will be needed: differential services. Right now ISPs can offer only one
product: best-effort IP packet delivery. In fact, the best guarantee
that a customer can get from many ISPs is that they agree to accept your
packet; they make no claim that they will do anything useful with it,
but they will accept it. I think that ISPs need to start offering at least two
levels of service. In one level a user will get the current "best
effort" service, which is subject to packet loss due to congestion. The
second level of service, which would be more expensive, would include a
guarantee that this part of the service would not be oversold and that the ISP
would deliver the packet, not just accept it. The delivery might be to
another customer of the same ISP, or it might be to another ISP, which might or
might not have an equivalent guaranteed-delivery service. Thus implementing
differential services is not just a technical issue (the IETF's integrated
services work may be usable here) but also a policy and ISP-relationship
issue. This is one place where ISP-to-ISP payments may actually be
reasonable. I think that differential services are needed because I do not
think that the ISPs can keep ahead of the traffic growth curve for best-effort
delivery; the result will be an Internet that gets more and more
congested, making it an unsuitable vehicle for the support of much of the
projected Internet-based commerce. With differential services, someone running
a commercial web site might be able to pay more to ensure that their customers
get reliable connections.
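
As a sketch of what the forwarding side of such a two-level service could
look like, here is a strict-priority, two-queue model in Python; the queue
limit and class names are invented, and real routers would use more refined
scheduling:

    from collections import deque

    PREMIUM = deque()          # capacity is provisioned, so no drops
    BEST_EFFORT = deque()
    BEST_EFFORT_LIMIT = 100    # congestion point for the cheap service

    def enqueue(packet, premium=False):
        if premium:
            PREMIUM.append(packet)
        elif len(BEST_EFFORT) < BEST_EFFORT_LIMIT:
            BEST_EFFORT.append(packet)
        # else: the best-effort packet is dropped due to congestion

    def dequeue():
        # Strict priority: guaranteed traffic is always forwarded first.
        if PREMIUM:
            return PREMIUM.popleft()
        if BEST_EFFORT:
            return BEST_EFFORT.popleft()
        return None

The economics matter as much as the queuing: the premium queue stays short
only if the ISP refuses to sell more guaranteed capacity than it actually
has.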
From a technical and business point of view,
I'm optimistic about the future of the Internet. Things get a bit trickier when
contemplating the impact of governments trying to control or tax the contents
of the transmissions over the Internet.