The following text is copyright 2005 by Network World. Permission is hereby given for reproduction, as long as attribution is given and this notice is included.
Implications of an improving Internet
By Scott Bradner
Most of
the Internet has been getting better over the last few years. In much of the world the Internet is
now good enough for all but the most demanding applications. This improvement has been in the
default "best effort" service and has not depended on Internet
service providers (ISPs) implementing fancy quality of service (QoS)
mechanisms. Paradoxically, some ISPs may see this news as a threat to their
future financial health.
There
are a number of research groups currently studying Internet performance. It is not easy to get good data about
Internet performance, as KC Claffy details in one of her talks (http://www.caida.org/outreach/presentations/2005/cenic0503/). KC is the main instigator of the Cooperative Association for Internet Data Analysis (CAIDA), a UCSD-based analysis and research group that is one of the best of the Internet-related research centers, with an amazing program plan (http://www.caida.org/projects/progplan/progplan03.xml). The CAIDA web site (http://www.caida.org/) contains a wealth of interesting and important papers and presentations that try to help people understand this Internet thing.
CAIDA is
not the only group doing research on the Internet. Some members of the physics community have been studying
Internet performance for most of the last 10 years. The International Committee on
Future Accelerators (ICFA) has had working groups thinking about Internet
performance since at least 1997.
The current iteration was formed in 2002
(http://www.slac.stanford.edu/xorg/icfa/scic-netmon/). These groups have published a series of
reports on the state of Internet performance, the latest of which was published
in January 2005 (http://www.slac.stanford.edu/xorg/icfa/icfa-net-paper-jan05/). I'm not sure why the physicists are studying Internet performance, unless it's to figure out whether they can use the Internet to deliver the (very) large datasets that their experiments produce; in any case, it's very good work.
The
report mostly deals with packet loss in data transmissions, with round trip
times and with data throughput between the Stanford Linear Accelerator (SLAC)
and testing points throughout the world.
The set of countries in which the testing points are located represents 78% of the world's population and 99% of the world's Internet users.
The test results show that by the end of 2003 the packet loss rate to countries with 77% of the world's population was low enough that voice over IP (VoIP) would work very well (63.5%) or well enough to be easily understandable (13.8%). This is up from 48.8% in 2001. One example is reliability within the US: the loss rate dropped from over 10% in January 1995 to under 0.5% in January 2004. Improvements were also seen in reducing round trip times and increasing data transmission throughput.
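The kind of bucketing behind those figures can be sketched as a simple classifier that maps a measured packet-loss rate to a rough VoIP usability band. The thresholds below are my own illustrative assumptions, not the ICFA report's actual cutoffs.

```python
def voip_quality(loss_rate: float) -> str:
    """Classify a packet-loss rate (a fraction, 0.0-1.0) into a rough
    VoIP usability band. Thresholds are assumed for illustration."""
    if loss_rate < 0.01:      # under 1% loss: assumed "works very well"
        return "very good"
    elif loss_rate < 0.025:   # 1% to 2.5% loss: assumed "understandable"
        return "understandable"
    else:                     # higher loss: voice quality degrades badly
        return "poor"

# The US loss rates quoted above:
print(voip_quality(0.10))    # over 10% loss in January 1995 -> poor
print(voip_quality(0.005))   # under 0.5% loss in January 2004 -> very good
```

In practice, VoIP quality also depends on round trip time and jitter, not loss alone, which is why the report tracks those measures as well.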
These
improvements were in the standard Internet "best effort"
service. As Vonage and other
overlay VoIP services have shown, VoIP "just works" to much of the
world most of the time. You do not
have to pay the carriers extra money for better service to make VoIP work well
enough to be very useful. This fact may be a real threat to the financial well-being of carriers that have been planning on making more money by charging extra for better-quality service -- and that is most of the telco-based carriers. These carriers will be forced to try to make money selling a commodity service (unless more of them purposely degrade their networks to break VoIP, as Vonage has claimed some already do). These carriers could be in for a rough ride.
Disclaimer: Harvard claims not to be in the commodity service business but has not expressed an opinion on carriers that may be forced to be so -- thus the above is my own opinion.