The IETF at 25: Unfinished business
19 January 2011, 08:31
As I write this, the IETF has been around for 25 years and a few hours. The first meeting started at 9 a.m. on Thursday, Jan. 16, 1986, in San Diego with 21 people in attendance -- a far cry from the most recent meeting in Beijing, which attracted 1,207 attendees.
The Internet we have today, and that most enterprises heavily depend on, is largely a result of IETF technologies, and more importantly, the IETF philosophy of the proper role of the network. The network that sprang from this philosophy is now under sustained attack and the future role of the IETF will depend on how well it responds to this attack.
MORE BIRTHDAY WISHES: Happy 25th Birthday, IETF
I first started paying attention to the IETF in 1988 or 1989 by monitoring various IETF mailing lists. The first meeting I attended in person was the Tallahassee, Fla., gathering in early 1990. (My boss was not willing to spring for the previous meeting in Hawaii -- so it goes.) I go back a ways with the IETF, but there are others who go back further -- 20% (four) of the attendees of the first meeting are still active in the IETF.
If there is a key document that was the origin of the IETF's design philosophy, it is the 1984 Saltzer, Reed & Clark paper "End-to-End Arguments in System Design." A dozen years after this paper was published, the IETF published an expanded description of its design philosophy in RFC 1958, "Architectural Principles of the Internet." (You can find pointers to these, and many other relevant documents, by clicking here.)
The IETF has interpreted the "End to End" paper to basically say that the network should not be application aware. Unless told otherwise by an application, the network should treat all Internet traffic the same.
The IETF has defined various ways that an application can ask for special treatment by the network (such as Diffserv, defined in RFC 2474), but the default assumption is that networks are simply ways to get bunches of bits (packets) from one place to another.
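As a concrete illustration of what "asking for special treatment" looks like in practice, here is a minimal sketch of an application marking its outgoing packets with a Diffserv code point, using the standard socket option for the IPv4 TOS byte. The choice of the Expedited Forwarding code point is just an example; whether the network actually honors the marking is entirely up to the operators along the path.

```python
import socket

# Expedited Forwarding (EF) DSCP value, per RFC 2474 / RFC 3246.
DSCP_EF = 46
# The DSCP occupies the upper six bits of the old IPv4 TOS byte,
# so it is shifted left by two before being handed to the socket.
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request Diffserv treatment by marking every packet this socket sends.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
```

Note that this only expresses a request: routers that ignore or remark the Diffserv field will treat these packets like any others, which is exactly the "network should treat all traffic the same unless told otherwise" stance described above.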
In brief, this design philosophy has led the IETF to create technologies that can be deployed without having to get permission from network operators or having to modify the networks. This is not to say that IETF protocols have not been impacted by the proliferation of firewalls and network address translators (NATs) -- see RFC 2775. But this design philosophy has also led to an environment where network operators do not get added value from high-value traffic. This is the heart of the network neutrality discussions being played out so loudly in the wake of the recent FCC proposal.
The IETF has developed many technologies that make the Internet function, make the networks that make up the Internet more secure and make the Internet work over ever-faster and ever-changing transport technologies. But the IETF has developed far more technologies that run over the Internet to provide end-to-end functions for you and me to use.
My final column of last year (Goodbye Internet, we hardly knew ye?) was quite pessimistic. So is this one, my first column of the new year.
Last year I was worried about what rules regulators and politicians were going to impose on the Internet. This year, my pessimism is focused lower in the protocol stack: I'm worried about what kind of network the network operators will provide for the IETF to build on, for me and you to use, and for tomorrow's enterprises to depend on. The IETF must play a role in the upcoming debate, and you should as well.
Disclaimer: Most of Harvard would be affected if things turn out wrong, though most of it does not know it yet. Some parts of the university do care quite deeply about these issues, but I did not ask them for their opinion before writing this column, so the above is mine.