The absence of network security

By Scott Bradner
Network World, 08/02/99

"There is no such thing as a secure computer network."

The New York Times said as much a week ago, and if The Times says something, it must be true. But do protocol and applications developers understand the implications?

Almost by definition, computer networks cannot be, in themselves, secure. The aim of computer networks is to facilitate access to computer-based resources. In order to do so, they transport information from one place to another, generally with a user or two somewhere along the line.

Users are a problem in the security world. They forget things like passwords. They get frustrated at the imposition of complex security procedures and circumvent these procedures to make their lives easier. They loan their accounts to friends. And many users think they are underpaid, overworked or underappreciated - as a result, these users are potentially corruptible.

It sure would be a lot easier, security-wise, without users.

Anything you do to make users' lives easier has security implications. For example, if you allow a remote user to access corporate servers, you have to open a door that other remote people may be able to exploit. If you run an e-mail system that can transfer programs or macro-filled documents, you are opening a barn door.
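To make that trade-off concrete, here is a minimal sketch in Python. Everything in it - the service name, the port, the code itself - is my own illustration, not anything from the column; the point is only that "letting the remote employee in" means listening on a network port that any host able to reach the machine can also connect to.

import socket

def start_remote_access_service(port=2222):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Binding to 0.0.0.0 exposes the service to the whole network;
    # the convenience and the exposure are the same open door.
    server.bind(("0.0.0.0", port))
    server.listen(5)
    while True:
        conn, addr = server.accept()
        # Nothing here tells the remote employee apart from a stranger;
        # that burden falls on whatever authentication gets bolted on
        # after the connection is accepted.
        print("connection from", addr)
        conn.close()

The same logic applies to the e-mail example: any channel that will carry a program or a macro-filled document for a legitimate user will carry one for an attacker too.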

But it turns out that a major problem is the attitude of protocol and applications designers.

In the IETF, we now insist that all working groups keep security in mind as they design protocols. But even in the IETF, security is often reluctantly added at the end rather than designed in from the beginning. I say "reluctantly" because when I ask why a working group has not yet considered security, I keep getting the response, "My customers are not asking for security." It has sometimes been quite a fight to get working groups to take the issue seriously.

If it's this hard to get secure protocols within an organization that has made security a specific goal, it seems to be almost impossible in commercial applications development organizations. Features are added to programs seemingly without any thought of the security implications.

This is not going to be easy to fix. Security is hard. Some of the people who would exploit security holes are very smart (if more than a bit immoral). They will find any small chink in the armor - and unless the developer is a real security expert, it is hard to see a chink while programming one in. The problem is made worse by the easy-to-run exploitation scripts that get widely distributed.
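As a hedged illustration of how easy it is to program a chink in without noticing, consider a hypothetical "ping this host" convenience feature. The feature and both function names below are my own invention, not something from the column; the first version quietly lets a user run arbitrary commands, and the second shows one way to close the hole.

import os
import subprocess

def ping_host_unsafe(hostname):
    # The chink: user-supplied text is handed straight to a shell, so
    # input like "example.com; rm -rf /" runs extra commands.
    return os.system("ping -c 1 " + hostname)

def ping_host_safer(hostname):
    # One fix: pass the arguments as a list so no shell ever parses them.
    result = subprocess.run(["ping", "-c", "1", hostname])
    return result.returncode

An exploit for the unsafe version takes one line to write and no expertise at all to run, which is exactly the dynamic described above.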

Companies must get security expertise into their software development groups, and users must actually use the resulting security tools. Otherwise, the Internet bubble may burst in a very ugly way.

Disclaimer: Harvard has seen many bubbles come and go, but the above worry about this bubble is mine.