The following text is copyright 2000 by Network World. Permission is hereby given for reproduction, as long as attribution is given and this notice is included.

Recycling failed technologies?

By Scott Bradner

I got a promotional email from John McQuillan the other day. It was pushing his Next Generation Networks conference at the end of October in Washington, D.C., and was titled "Rethinking Routing." But the gist of the message seems to me to reflect a fondness for technologies past more than any real rethinking.

According to John, the reason it may be time to rethink Internet routing is the pending explosion in the availability of optical networking. Wavelength division multiplexing (WDM), dense wavelength division multiplexing (DWDM), optical multiplexers and all-optical cross-connects are being deployed, or are about to be deployed, over the rapidly increasing web of optical fibers crisscrossing the country. Fiber that, as an FCC official noted, is being deployed on a per-fiber basis faster than the speed of sound. This new network will have the ability to be reconfigured in real time, and that opens the door to new technologies that could reroute IP traffic in response to congestion in the network. The multi-billion-dollar prices that have recently been paid for companies in the optical networking field, even companies without any real products, show that John is not alone in thinking this area is important.

I don't disagree with the above (other than to marvel at the prices), but I do wonder whether John's leap from the ability to do agile networking over optical networks to the usefulness of doing much of it is backed up by the needs of the network of the future. I come down to the same issue that has kept me from endorsing a number of "advances" in Internet technology over the past 10 years. Most of these advances are trying to remake the datagram-based Internet into a circuit-based clone of the phone network. But Internet traffic has little in common with phone traffic; even Internet-based phone traffic may have little in common with traditional phone traffic.

Circuit-based technologies, such as ATM, SONET and MPLS, are used in large US-based ISPs these days to balance traffic between pairs of cities. Doing this type of balancing on a real-time basis may become useful in the future, but I question going much finer in granularity than city pairs, or maybe a few aggregate QoS classes between city pairs. Since there are not all that many cities where the big ISPs have points of presence, this does not amount to all that many circuits: not nearly enough to justify the hype that is going around.

This view runs counter to that of John and many other pundits, who feel that the new ability to be agile will spawn many more circuits in the Internet. But it is consistent with Internet history, which has weathered many other attacks on the utility of datagram-based networking. Determining the real trends will take some time, and if I'm wrong I'm sure someone will remember; I'm sure John will.

disclaimer: I'll be chairing a session or two and giving a tutorial at NGN, but as far as I know Harvard will not be attending, and the above observation is my own.