[standards-jig] JNG Ramblings.

Mike Lin mikelin at MIT.EDU
Mon Aug 12 07:51:50 UTC 2002


> If the four-orders-of-magnitude increase in both storage capacity and
> network bandwidth we have seen over the last decade continues into the
> future, a 32-bit 'size' field will become insufficient. If that happens, 
> how would we upgrade a protocol like the one Mike Lin proposed @ 
> http://mikelin.mit.edu/xmpp/jng/ ? In a similar vein, look at the 
> evolution of Microsoft's (V)FAT filesystem.

I was thinking about this today. The protocol chooses 32-bit fields not
because I think 32 bits is an immortally high limit on message size, but
because they are so easy to work with on 32-bit platforms. So I was
thinking about what to do once we all have 64-bit platforms, when 32-bit
values are still easy to work with, but we would like to be working with
64-bit values.

I really think it would work fine to define an exactly analogous
protocol that operates on 8-byte headers rather than 4-byte headers, and
run it on a different port. Except for the higher size limits, the new
protocol would not be different at all. Thus, backwards compatibility,
to the extent of full fidelity in a server speaking both protocols,
would be maintained for payload sizes within the 32-bit limits. Protocol
framer implementations would need only minimal changes to support the
64-bit protocol (assuming they get compiled for a 64-bit platform);
eventually, 32-bit implementations would go away.
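
To make that concrete, here's a rough sketch (in Python, which obviously
isn't what a real framer would be written in) of the kind of change
involved, assuming a purely hypothetical header layout that is nothing
more than a big-endian length prefix; the real header carries more than
that, so treat this as illustrative only:

    import struct

    def read_frame(sock, header_size=4):
        # header_size=4 models the 32-bit protocol, header_size=8 the
        # 64-bit variant; the only framer change is the width of the
        # length field it unpacks.
        fmt = ">I" if header_size == 4 else ">Q"
        (length,) = struct.unpack(fmt, _read_exact(sock, header_size))
        return _read_exact(sock, length)

    def _read_exact(sock, n):
        # read exactly n bytes, bailing out if the peer closes early
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise EOFError("connection closed mid-frame")
            buf += chunk
        return buf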

We admittedly would need to somewhat arbitrarily define how to handle
messages that do not fit within the 32-bit limits when they are sent to
someone using the 32-bit protocol. We could just reject such messages as
too large, of course, or, since the envelope is transparent XML, we
could amusingly imagine having the router chunk the message and rewrite
the manifest to match.
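
For the "reject as too large" option, the check the router would apply
when relaying to a 32-bit peer is trivial; a sketch, with made-up names,
and assuming the 16MB part-payload ceiling I mention below:

    MAX_32BIT_PART = 1 << 24  # 16MB; hypothetical name for the 32-bit part limit

    def forward_to_32bit_peer(part_size, send, reject):
        # relay policy when the receiving peer only speaks the 32-bit protocol
        if part_size > MAX_32BIT_PART:
            reject("payload exceeds 32-bit protocol limits")
        else:
            send()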

So, my idea is to align upgrades in the protocol with upgrades in
processor architecture. Upgrading processor architectures always causes
something of a sea change in software, and I don't think this is
realistically avoidable to any great extent; it's always a bumpy ride in
between. You probably remember having to choose to run Windows in
386-enhanced mode, or trying to get old programs to run in Win95. Most
of you probably remember stuff from even further back that I don't.

In an unusual step for me, I'll insert a non-normative dose of realism
here too. The largest part payload in the 32-bit protocol is 16MB. Let's
say that we were using an XML-framed protocol instead, so that there is
no actual limit to payload size. What are the chances that software
designed today is going to properly handle a 16.01MB chunk of XML
without blocking unacceptably, crashing, or otherwise choking? I think
pretty small. So the non-normative non-argument I'm trying to make is
that somewhere down the road there are going to have to be practical
changes to the implementations anyway, and inserting a slightly
different protocol at some later date is not really such a disaster,
especially where backwards compatibility is largely maintained. LDAP did
it, and it was a bit bumpy, but on the whole it didn't cause nuclear
holocaust.

In the meantime, I think we get a lot of benefit from a 32-bit binary
wire framing protocol.

-Mike



