[standards-jig] JEP 34 (SASL)

Mike Lin mikelin at MIT.EDU
Tue Aug 20 23:24:37 UTC 2002

> It should be obvious why having the user id might be desirable for 
> authentication.  Why would it be desirable to have it before methods are 
> given?  Because I have top-secret clearance and must use Kerberos but 
> you are just a plain ol' office worker and plaintext is fine for you. 
> The server ensures that I auth using a method appropriate for my clearance.

Standards-compliance concerns aside, this is probably not a bad idea; I
imagine the reason DIGEST-MD5 (which is backwards-compatible with HTTP)
doesn't do the same is that HTTP's single request-response nature makes
a multi-phase authentication protocol awkward to bolt on.
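To make the idea from the quoted text concrete, here is a minimal sketch of a server picking which SASL mechanisms to advertise based on a per-user policy. The policy table, user names, and function name are all illustrative assumptions, not anything specified by JEP 34:

```python
# Hypothetical sketch of per-user mechanism selection: the server
# withholds mechanisms that are too weak for a given user's clearance.
# Mechanism lists and the POLICY table are made up for illustration.

SUPPORTED = ["KERBEROS_V4", "DIGEST-MD5", "PLAIN"]

POLICY = {
    "top-secret-user": ["KERBEROS_V4"],  # must use Kerberos
    "office-worker": ["KERBEROS_V4", "DIGEST-MD5", "PLAIN"],
}

def mechanisms_for(user):
    """Return the mechanisms the server should offer this user."""
    # Unknown users get a conservative default (no Kerberos assumed).
    allowed = POLICY.get(user, ["DIGEST-MD5", "PLAIN"])
    return [m for m in SUPPORTED if m in allowed]
```

The point is only that the server needs the user id *before* it sends the mechanism list, which is exactly why the quoted text wants the id early in the exchange.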

If we're going to be adding things to the authentication protocol
anyway, though, I frankly wonder whether we wouldn't be better off
extending jabber:iq:auth to support all the nonce and other
security-related fields that DIGEST-MD5 supports. Then we wouldn't have
to write these silly DIGEST-MD5 lexers, which is a pain unless we're
using one of the grand total of two SASL libraries. Anyway, that was
just an idle musing, and should not be taken seriously.
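For what it's worth, the "silly lexer" in question is a parser for the comma-separated key=value syntax of a DIGEST-MD5 challenge, where quoted values may themselves contain commas. A rough sketch of one (the function name is mine, and this ignores some RFC 2831 corner cases):

```python
import re

def parse_digest_challenge(challenge):
    """Parse a DIGEST-MD5 challenge string such as
    realm="foo.org",nonce="...",qop="auth,auth-int" into a dict.
    Quoted values may contain commas, which is what makes a naive
    split(',') insufficient and forces everyone to write a lexer."""
    pairs = re.findall(r'(\w[\w-]*)=("(?:[^"\\]|\\.)*"|[^,]*)', challenge)
    out = {}
    for key, value in pairs:
        if len(value) >= 2 and value.startswith('"') and value.endswith('"'):
            # Strip the surrounding quotes and undo escaped quotes.
            value = value[1:-1].replace('\\"', '"')
        out[key] = value
    return out
```

For example, `parse_digest_challenge('qop="auth,auth-int",algorithm=md5-sess')` keeps the comma inside the quoted qop value intact.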

> We are not required to use base64 encoding of everything.  Is there a 
> reason that this was chosen other than that IMAP did it?  Is this so 
> that we can support Kerberos in the future?  It makes it difficult to 
> test in the present.

While I agree it would be nice for the tokens not to be opaque, even
DIGEST-MD5 unfortunately does not require (though it does recommend)
that fields such as nonce/cnonce be composed of XML-transportable
characters, so I think we're rather stuck here.
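The "stuck" part can be shown in a few lines: a challenge whose nonce happens to contain XML metacharacters or control bytes cannot be dropped into an XML stream raw, but its base64 form is plain ASCII and always safe. The challenge bytes below are invented for illustration:

```python
import base64

# A DIGEST-MD5-style challenge with a nonce containing bytes ('<', '&',
# a control character, a non-ASCII byte) that XML cannot carry raw.
# RFC 2831 only *recommends*, not requires, printable nonces.
challenge = b'realm="foo.org",nonce="\x01<&\xff",qop="auth"'

# The base64 alphabet (A-Z, a-z, 0-9, +, /, =) contains no XML
# metacharacters, so the encoded form can sit inside an element as-is.
encoded = base64.b64encode(challenge).decode("ascii")

# Round-tripping recovers the original bytes exactly.
decoded = base64.b64decode(encoded)
```

So base64 buys XML-safety for arbitrary mechanism tokens, at the cost of the on-the-wire opacity complained about above.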

> This is a minor point raised by step 3 in the second sequence above 
> (<sasl:response>base64(bubba708)</sasl:response>) and that is whether we
> want to send bubba708 at foo.org or just bubba708 like we do today.  This 
> becomes a bit of an issue with respect to base64 encoding in that for 
> authentication to happen correctly, we'd have to snag the host out of 
> the stream header (we do this today), unencode the response to get 
> "bubba708" (don't do today), glue it onto the "foo.org" (we do this 
> today), and re-encode the whole mess (don't do today) before handing off 
> to the sasl libs.

I don't really follow why it is necessary to rewrite an encoded value at
any point. Who is it that needs to unencode the node response, append
the domain, and reencode it? Why do they need to do that?
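For reference, the rewriting step the quoted text describes would amount to something like the following (function and variable names are mine; this is a sketch of the quoted complaint, not an endorsement of doing it):

```python
import base64

def qualify_response(encoded_node, domain):
    """Decode the client's base64 response to get the bare node
    ("bubba708"), glue on the domain snagged from the stream header
    ("foo.org"), and re-encode the whole mess before handing it to
    the SASL library."""
    node = base64.b64decode(encoded_node).decode("utf-8")
    qualified = f"{node}@{domain}"
    return base64.b64encode(qualified.encode("utf-8")).decode("ascii")
```

E.g. `qualify_response(base64.b64encode(b"bubba708").decode(), "foo.org")` yields the base64 encoding of "bubba708@foo.org". Whether any party actually needs to perform this decode/append/re-encode dance is exactly the question raised above.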
