Copyright 1997 Nikkei Business Publications, Inc. All rights reserved. Permission is hereby
given for reproduction, as long as attribution is given and this notice is
included.
Should the Internet be like McDonald's?
By: Scott Bradner
The Internet is evolving into a general data
infrastructure. In the future this infrastructure will have to support a wide
variety of functions. These functions span the range from email between casual
acquaintances to business transactions worth billions of yen, from the
asynchronous delivery of junk email to real-time, interactive video
conferences. One of the most difficult questions facing the developers of the
technology that will be needed to support this future network is just what
functions the users of the network will actually need. Many observers have
assumed that one of these functions will be an ability to control the quality
of service (QoS) of the network. But the assumptions that many people have had
concerning the required kinds of QoS may prove to have been incorrect.
There are many different definitions for QoS in data networking; the most
basic evolved with the concept of a Service Level
Agreement (SLA), common in many IBM mainframe organizations. This definition
says that QoS is the ability to predict or control the system response time
under specific conditions. In this case, the system includes the network
connections and the computers at the ends. Many people who would like SLA-like
QoS in the Internet assume that the QoS applies to an instance of an
application; in other words, to the system response time of an individual user
running an individual application. A further assumption is that the manager can
control the delivered QoS individually for each user of each application. For
example, the network manager can configure things so that a company president
would get a faster response time than a clerk would when running the same
application, even when many clerks are running the application at the moment
the company president tries. Applying this type of QoS to the Internet may not be
easy.
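To make that per-user model concrete, here is a minimal sketch in Python (the user names and priority values are invented for illustration) of the strict priority scheduling such a scheme implies:

    import heapq

    # Requests are served strictly by the priority assigned to the user;
    # lower numbers are served first. The values are invented.
    PRIORITY = {"president": 0, "clerk": 10}

    queue, seq = [], 0

    def submit(user, request):
        """Queue a request; seq keeps equal-priority requests in FIFO order."""
        global seq
        heapq.heappush(queue, (PRIORITY[user], seq, user, request))
        seq += 1

    def serve():
        """Hand back the highest-priority waiting request."""
        _, _, user, request = heapq.heappop(queue)
        return user, request

    submit("clerk", "report #1")
    submit("clerk", "report #2")
    submit("president", "dashboard")
    print(serve())  # ('president', 'dashboard'), despite arriving last

Doing this inside a single computer is trivial; the open question is how to get the same effect across a network of independent routers.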
Ever since the decision in the mid-1960s to
use a datagram protocol for the ARPANET, there has been a spirited discussion
over how to support quality of service controls in TCP/IP. QoS has been a
concern from the very beginning of the development of IP. The IP datagram
header includes "type of service" bits, which were put there in
anticipation of a scheme in which routers would use these bits to determine
the relative priority of each datagram. The techniques for using the type of
service bits were never worked out, however, and there are other problems with
doing QoS in the datagram-based Internet.
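For reference, RFC 791 lays out the type of service byte as three precedence bits followed by delay, throughput, and reliability flags. A small sketch of packing that byte, in Python (the helper is written for this column, not taken from any real router):

    # The 8-bit "type of service" field of the IPv4 header (RFC 791):
    # | precedence (3 bits) | delay | throughput | reliability | reserved (2) |

    def make_tos(precedence, low_delay=False,
                 high_throughput=False, high_reliability=False):
        """Pack a type-of-service byte; precedence runs 0 (routine) to 7."""
        if not 0 <= precedence <= 7:
            raise ValueError("precedence must be 0..7")
        tos = precedence << 5      # precedence occupies the top three bits
        if low_delay:
            tos |= 0x10
        if high_throughput:
            tos |= 0x08
        if high_reliability:
            tos |= 0x04
        return tos

    # A router honoring the bits could, in principle, queue by precedence:
    assert make_tos(5, low_delay=True) == 0xB0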
In a connection-oriented protocol, such as
IBM's SNA, the path that the data takes through a network is pre-defined. If a
path is fixed at the start of a session and does not change during the session,
it is easy to see how one could configure the devices (routers or switches)
along the path to guarantee specific levels of service when handling specific
data flows. But in datagram protocols there is no fixed path because each
individual datagram is handled separately by the routers. The routers forward
each datagram by whatever path seems best at the instant that the router is
dealing with the datagram. Because the "best" path can change
frequently, the datagrams often wind up taking different paths through the
network. It is far harder to support any sort of quality of service controls
when you cannot predict what path a data flow will take. Some
commentators, mostly from traditional mainframe-based computing environments or
with a telephone background, have for many years asserted that support for QoS
is impossible in a datagram-based network and that the Internet must evolve
into using connection-oriented technology such as ATM before any sort of QoS
support would be achievable.
The IETF Resource ReSerVation Protocol
(RSVP) attempts to solve the problem of changing paths by periodically sending
reservation messages. If a path through the network changes the reservation
message will follow the new path and a reservation for differential datagram
handling will be installed in the routers along the new path. A problem can
occur if the routers along the new path do not have enough reserve resources to
honor the new request. In this case the user will get a spontaneous reservation
rejection message. But this is a better outcome than simply terminating the
session whenever the path changes. RSVP is designed to support the
per-instance-of-application model of how QoS should work.
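The refresh mechanism is a form of what protocol designers call "soft state": a reservation survives only as long as refresh messages keep arriving. The sketch below, in Python, illustrates the idea only; the refresh period, timeout rule, and flow identifiers are invented and are not RSVP's actual formats:

    REFRESH_PERIOD = 30           # seconds between refreshes (illustrative)
    TIMEOUT = 3 * REFRESH_PERIOD  # drop state after a few missed refreshes

    class Router:
        def __init__(self, capacity_kbps):
            self.capacity = capacity_kbps
            self.reservations = {}  # flow id -> (kbps, time of last refresh)

        def refresh(self, flow, kbps, now):
            """Install or renew a reservation; reject it if capacity is short."""
            self.expire(now)
            committed = sum(r for f, (r, _) in self.reservations.items()
                            if f != flow)
            if committed + kbps > self.capacity:
                return False  # the "spontaneous rejection" case in the text
            self.reservations[flow] = (kbps, now)
            return True

        def expire(self, now):
            """Soft state: forget any flow whose refreshes stopped arriving."""
            self.reservations = {f: (r, t)
                                 for f, (r, t) in self.reservations.items()
                                 if now - t < TIMEOUT}

    # When the path moves, refreshes start arriving at the new routers,
    # and routers on the old path simply time the reservation out:
    r = Router(capacity_kbps=1000)
    assert r.refresh("flow-A", 600, now=0.0)
    assert not r.refresh("flow-B", 600, now=10.0)  # not enough reserve left
    assert r.refresh("flow-B", 600, now=100.0)     # flow-A's state expired

But another view is starting to emerge.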
It has been a common understanding that the
Internet needed QoS controls which would permit a specific group of people to
perform some specific function at a specific time. An example of this is a
group of people engaging in an interactive video discussion where the transport
of the video and audio are carried in IP datagrams over the Internet.
Interactive video requires a reliable network with quite low system latency, on
the order of a few hundred milliseconds, along with the ability to transport
relatively large amounts of data, a few hundred kilobytes per second. Any data that is lost or
delayed in the network lowers the fidelity of the sound or image. I am sure
that some network users desire this type of function (although it is not
clear to me that doing this sort of thing over a common infrastructure is
cost-competitive with other ways of performing the same task), but it may be
the case that users want consistent, predictable network service rather than
being focused on a few specific applications run from time to time.
During the recent IETF meeting in Memphis,
Tennessee, the transport area of the IETF held a birds of a feather (BOF)
session dedicated to trying to understand what network users see as their
need for differentiated services in the Internet and in their own internal
TCP/IP networks. One person who spoke during the BOF wanted the
ability to sell a dedicated-bandwidth path between two customer sites to
replace point-to-point leased data lines, but the rest of the speakers asked
for "predictable service quality." They wanted their users to
experience the same system response time tomorrow that they got today. Just
like a consumer knows what to expect when they go into a McDonald's restaurant
anywhere around the world, these speakers wanted their users to know what to
expect. They specifically did not request that this be done on a per
application basis. They wanted all of the functions their users perform over
the network to exhibit consistent system performance.
They also did not ask for fast system
response, just predictable response. This makes sense in terms of human
psychology. If people sometimes get good performance from a system, they will
come to expect that good performance all the time and will be disappointed
when they do not get it. In
psychology this is called "random reinforcement". If a person is
rewarded every time they perform some action, they will stop the action as soon
as the reward stops. But if the reinforcement is given randomly when the action
is performed, the person will continue to perform the action, expecting to be
rewarded sometime soon. (Note that this is the same urge that drives gambling:
even though people are only rewarded from time to time, they continue to
gamble because they keep expecting to be rewarded.) If the reward is
occasional faster system response time, the users will expect that faster
response and thus will be disappointed and unhappy with the network whenever
they do not get fast response. Note that in this context, system response time
may have to be slowed down when it is too fast, so as not to cause undue
expectations.
What does that mean for the development of an
Internet that can be used for normal, day to day, business use? It probably
means that it is more important to work on the development of technologies that
can support a few different levels of reliability on the network rather than
focusing exclusively on technologies that support per-instance-of-application
QoS controls.
A number of equipment vendors are working in
this area and trying different approaches to the problem. The basic aim of most
of the efforts is to create the technology to permit the ISPs to offer a
"business class" of Internet service.
Business class Internet service would still
be connectionless, datagram TCP/IP, but it would add the ability for an ISP to
mark datagrams as they enter into the ISP's network and then prioritize the
handling of the marked packets in the routers in the middle of the ISP's
network. The ISP would guarantee that some specific percentage of its network
capacity is always available to support the business class service, and would
not oversell that capacity. The ISP ensures the guarantee by controlling the
amount of traffic that gets marked as business class. In this way the ISP, and
their other customers, are safe even if a customer violates the agreement on
the maximum amount of business class traffic that they are permitted to send.
Arbitrarily complex rules can be used to
determine which datagrams get marked as business class, but I expect that, at
least in the early stages, very simple rules will be used. For example,
all traffic to and from the corporate web server would be marked for business
class processing.
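A sketch of what such an edge rule might look like, in Python (the address, rates, and class names are all invented for illustration): datagrams to or from the web server get the business class mark, but only up to the rate the customer contracted for, so excess traffic silently falls back to best effort:

    WEB_SERVER = "192.0.2.10"  # placeholder address for the corporate server
    BUSINESS, BEST_EFFORT = "business", "best-effort"

    class TokenBucket:
        """Caps how many business class bytes per second get admitted."""
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate, self.burst = rate_bytes_per_s, burst_bytes
            self.tokens, self.last = burst_bytes, 0.0

        def allow(self, size, now):
            # Accumulate tokens for the time elapsed, up to the burst limit.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size:
                self.tokens -= size
                return True
            return False

    def classify(src, dst, size, bucket, now):
        """Mark web-server traffic as business class, within the agreed rate."""
        if WEB_SERVER in (src, dst) and bucket.allow(size, now):
            return BUSINESS
        return BEST_EFFORT

    bucket = TokenBucket(rate_bytes_per_s=125000, burst_bytes=10000)
    print(classify("192.0.2.10", "198.51.100.7", 1500, bucket, now=0.0))

The token bucket is one standard way to enforce a rate cap like this at an ingress point; traffic beyond the cap is demoted rather than dropped.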
With this sort of capability an ISP could
offer to its customers network service with a generally predictable QoS as long
as both ends of the conversation were connected to the same ISP. Things get
much harder when attempting to get more than one ISP working together on things
like this.