[Standards-JIG] proto-JEP: Smart Presence Distribution

Michal vorner Vaner michal.vaner at kdemail.net
Thu Jun 1 15:06:28 UTC 2006

On Thu, Jun 01, 2006 at 04:48:43PM +0200, Carlo v. Loesch wrote:
> Richard Dobson typeth:
> | How is it not a fact that your spec lacks a form of error
> | recovery that retransmits only what failed, rather than the
> | whole list to the server?
> To the servers or the server? If you are talking about one single server,
> then indeed, should the network be broken it will have to recreate the
> list just for that single server. How many people do you know on jabber.org?
> I have a pretty well distributed roster - people are a bit everywhere.
> Additionally we can use Michal's hashes, so we get to keep lists across
> transport errors. Would be interesting to see how often or rare actual
> transport errors happen.
> Michal, how should we handle the hashes? Would you add them to the
> presence smarticast, and do error recovery if they fail, or would you
> burst out all hashes of all lists at connection linkup, no matter whether
> you actually need them? The first plan sounds saner to me.

Well, with this discussion, I decided I will write a JEP myself (one
that would handle generic smartcast and could be extended to any
particular case to add more savings - so exactly the other way around).
I wanted to extend the advanced addressing component to store the lists
and wanted the hash to be part of the JID (the name of the list, like
md5@smartcast.server.net).

I need to think about a few little details (like how to save the lists,
how to report errors, and so on).
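A minimal sketch of that naming idea, assuming the list's name is the MD5 hex digest of the sorted member JIDs (the service name and the function are hypothetical illustrations, not part of any published JEP):

```python
import hashlib

def list_jid(members, service="smartcast.server.net"):
    """Derive a list JID like "md5@smartcast.server.net" above, using the
    MD5 hex digest of the sorted member JIDs as the list's name.
    Both names here are illustrative assumptions."""
    digest = hashlib.md5("\n".join(sorted(members)).encode("utf-8")).hexdigest()
    return f"{digest}@{service}"

# After a transport error, either side can recompute the digest from its
# stored list; if the digests match, the stored list survives and nothing
# needs to be retransmitted.
print(list_jid(["alice@jabber.org", "bob@example.net"]))
```

Because the name is derived from the membership, a changed list automatically gets a new JID, so a stale list can be detected without per-stanza acks.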

> | > you still haven't understood the proto-JEP properly.
> | > no matter which part of the broad unicast fails and when, only that part
> | > needs to be repeated or re-established. you don't have to restart
> | > everything from scratch - you only need to redefine a couple of people
> | > on the list of one particular host, if that particular host had serious
> | > delivery problems.
> | >   
> | According to your spec, unless you have updated it very recently, when
> | you get an error you reset the whole list. Has this changed so that only
> | the failed items are resent? How did you accomplish this without acks?
> No, as I said, the list is updated per host. I thought you were talking
> about all hosts, which would equal a complete new "broadcast."
> | But it doesn't, you have to send the whole list (for a particular 
> | server) again, rather than just the single JID at that server that you 
> | were trying to add, you really do seem to be good at spinning facts and 
> | what people are saying, you really should get yourself a job in politics 
> | :P, hehe, but what I said is still true when related to the traffic 
> | between two servers.
> Ok I thought you were talking about *all* servers, because the traffic
> that it takes to update one server's list doesn't really account for much.
> And then you still have the hash.
> | > | "To transmit a message to a select group of recipients. A simple example 
> | > | of multicasting is sending an e-mail message to a mailing list."
> | >
> | > incorrect. why would one invent "ip multicast" if e-mail already did it.
> | >   
> | I see, so you are saying SES SIRIUS AB are liars then, hmm, interesting.
> Well they have a strange notion of multicast, and unfortunately they are
> not alone. But still what's the point in using the word for e-mail?
> Why would you need the word for it then? If the word is reduced to the
> meaning of sending something from here to more than one there, then
> one-to-many would be equal to multicast. Why say multicast if we already
> say one-to-many? Why abuse the historic meaning of multicast?
> | > this definition is mostly correct, although the intention is that every
> | > packet gets sent to australia only once, so if you know people on 4
> | > servers in australia, xmpp will, no matter which jep we apply to it,
> | > send the packets 4 times towards australia. that's no longer proper
> | > multicast then. it is what we call 'smarticast' at best.
> | >   
> | Just because all xmpp traffic for Australia doesn't get routed through a 
> no, not all xmpp traffic, that would be rebuilding irc.
> one smart router in australia for every context, chosen by the context.
> a context is any multicast sender, like a groupchat or a person generating
> presence.
> | Australia, all the receiving servers in Australia could have an 
> | agreement to share a single multi casting component for international 
> | traffic, now when the server in France tries to discover the multi 
> | casting component for the Australian domains they will all tell it to 
> | use the same one, the server in France when it tries to communicate with 
> | more than one of these Australian servers could realise this and 
> | negotiate a single list with the multi casting component that covers all 
> now we're talking.
> that's the kind of thing we want to bring up in our follow-up
> multicast jep that builds on top of this jep.
> see, it is much more useful to use the word for nothing less than
> the real thing, because it raises your aims to higher goals.  :)
> if you let jep-0033 be multicast, no-one will ever sit down and
> code the real thing.
> | Also there is no such term as smarticast you are just making it up, 
> | whereas an existing definition fits fine as you can see above.
> we made it up because we had too much respect for multicast to
> call this intermediate thing that way.
> | My suggested list generation is not verbose when directly compared to 
> | your method of sending lots of individual presence stanzas to build up 
> ok, this one is valid.
> | the list, and there are not lots of redundant acks, as I showed 
> | demonstrated when initially setting up the list there is only the need 
> | to send a single stanza which will get a single ack, to be classed as a 
> yes, this is certainly better than acking each presence stanza, but
> still you are sending an ack every time you make a change to the list,
> whereas we know the list is healthy because our link is sane, so we
> don't need *any* ack.
> | bunch of acks it would have to be more than one. Also I've got no idea 
> it is a bunch of acks because you have lists for every person on every
> server you are talking to. so even your initial setup will get you acks
> from all involved servers - tcp packets which are redundant because tcp
> itself already ensures the lists are safe from harm.
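As a back-of-the-envelope comparison of the two setup schemes argued over here (the roster figures are made up for illustration, not taken from either proposal):

```python
# Toy stanza counts for initial list setup, per the argument above:
# the <iq> scheme sends one list stanza plus one ack per remote server,
# while the plain presence fan-out sends one stanza per contact and
# relies on TCP instead of acks.

def iq_setup_stanzas(contacts_per_server):
    return 2 * len(contacts_per_server)  # one <iq> set + one result each

def fanout_stanzas(contacts_per_server):
    return sum(contacts_per_server.values())  # one presence per contact

roster = {"jabber.org": 12, "example.net": 5, "example.org": 1}
print(iq_setup_stanzas(roster), fanout_stanzas(roster))  # → 6 18
```

The point of contention is only the ack column: with few, large per-server lists the acks are cheap, while with many small lists they add a round trip per server.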
> | what you are going on about with regards to not needing to do 
> | "broadcast" any more I thought that was the whole point of both of our 
> | suggestions.
> yeah ok it doesn't matter if it still looks like a traditional
> presence fan out, or it looks like a fresh new <iq/> thing.
> | Extra presence stanza??? Not sure what you mean by this either, could 
> | you please explain.
> after setting up iq you need an extra round trip for the ack, then you
> can send the <presence to=router/>, which is also one more than in our
> proposal.
> | But yes it does delay the initial presence delivery, but this will only 
> | happen once as the lists will not need to be rebuilt and will be able to 
> | be re-used whenever they reconnect, I find it very unlikely anyone would 
> same goes for our lists, since you have to apply force to break a tcp
> connection if both servers take care of closing links gently.
> | if you are referring to the delay that will happen while waiting for the 
> | S2S connections to establish between the domains then that's not really 
> no, referring to the extra roundtrip delay for the <iq/>
> | As already proved by the ack discussion TCP is not stable enough or that 
> | discussion would not have happened in the first place, and there 
> | wouldn't be any of the acking proposals, so again please stop repeating 
> | your assertions that TCP is stable enough because there is plenty of 
> | evidence to the contrary, you continuing to state this just demonstrates 
> | a lack of full understanding of the issues at hand.
> it is the dimensions in which you and i are thinking. i say occasional
> breaks can be fixed, and it is cheaper than acking. you say breaks happen
> all the time, so fixing costs more than acking. i say breaks no longer
> happen all the time, because we enforce that servers stop closing idle
> links irresponsibly. you say, that's not enough. i say, okay, so what
> can we do now to actually find out who's right instead of keeping on
> guessing?
> i have a suggestion. let's make it a negotiation option to use acks.
> with acks, the implementation can ignore tcp errors. without acks,
> it will make good use of tcp stability. and the server administrator
> gets to decide which strategy works best. or the implementation
> switches acks on for servers that frequently burst into tears.
> | If its true that on average there are only a few people on peoples 
> | rosters then there is probably no point in even developing this 
> | optimisation in the first place as it will mean it does not make any 
> | real impact.
> exactly, so let's keep on going.
> | > if the amount of stanzas makes a difference though, then a single iq
> | > with a lot of subtags is better than a series of presence stanzas.
> | > please elaborate what the gains are moving away from the traditional
> | > presence fan-out as it has always been, and on the other hand, if
> | > anyone is contrary to the <iq> list setting strategy, please speak up now.
> | >   
> | Well using IQ means not just potential bandwidth savings when setting up 
> | lists (especially large ones as the bandwidth savings will increase the 
> | bigger the list), but it also means its easier for the server or 
> | component to manage the list as it can all be done quickly in a single 
> | operation rather than having to process lots of individual stanzas to 
> sounds like an architectural issue here. would be useful to hear from other
> server developers. i know for our part it doesn't make a big difference
> if we add list elements step by step or have it all at once. but in a
> language like c one could even precalculate the size of memory the list
> will take and allocate just the right amount. that does make sense.
> | build up the list, also because XMPP stanzas are not guaranteed to be 
> | delivered in the order they were sent (although on the whole they will 
> | normally get there in order) having the built in acking of IQ means the 
> | sending server knows once the operation has been completed and can then 
> well that again can be solved differently. since the new presence is
> sent to the router, the router will know that its list isn't completely
> processed yet (oh how i love multithreading ;)) and wait until the
> list's semaphore shows green. again, for our implementation, such an
> effect could not happen.
> | start sending the presence stanzas to the component knowing that they 
> | will definitely be delivered to the entire list that they set-up, 
> | whereas with your spec there is a chance that if you quickly send a 
> | presence stanza that is intended to be broadcast to the list along with 
> | the set-up stanzas that the broadcast stanza could get processed before 
> | all of the set-up stanzas have finished being processed.
> no, because the presence messages being processed *are* the message.
> and yes, because typically right after setting up the list, the probe
> request comes along. okay so this would be an architectural issue for
> server developers whose servers would not process the input of a single
> tcp linearly. but, does this really happen? do you really process the
> input of ONE SINGLE tcp stream in a non-temporal fashion? do i really
> have to take a scenario like this into consideration? how many other JEPs
> would not be able to operate if things aren't happening in the right order?
> i mean, do you get groupchat messages before the MUC has acked your entry?
> aren't there a million scenarios that can't work if you're not keeping
> the order of the stream?
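The "wait until the list's semaphore shows green" behaviour described above can be sketched like this (the class and method names are invented for illustration; a real server would do this per list, per sending context):

```python
import threading

class SmartcastList:
    """Minimal sketch: a broadcast blocks until the list setup has been
    marked complete, so a fast-following presence stanza can never be
    fanned out against a half-built list."""

    def __init__(self):
        self._members = []
        self._complete = threading.Event()

    def add(self, jid):
        # Called once per list-setup stanza as it is processed.
        self._members.append(jid)

    def finish_setup(self):
        # Called when the last setup stanza has been handled.
        self._complete.set()

    def broadcast(self, stanza):
        # Blocks until setup is complete, then fans the stanza out.
        self._complete.wait()
        return [(jid, stanza) for jid in self._members]
```

On a server that processes each incoming TCP stream strictly in order, the event is always set before `broadcast` runs and the wait is free; the gate only matters for the non-linear pipelines being debated above.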
> [deleted the bit about connections and nats and etc, we have differing
> opinions on that and i suggested a peaceful solution to that]
> | connection will never be able to be restored) newer incoming connections 
> | could be fine, but until the original connection times out you don't 
> | know and all of the data sent to the socket in the mean time will be 
> | completely lost.
> if you get a new connection in, it is obvious that something went wrong
> (or at least the other side thinks so), so you redo the list update.
> so renewing a connection is okay.
> [framing]
> | > it allows for
> | > [x] unquoted and unparsed binary file transfers
> | > [x] routing and multicasting content without losing time parsing it
> | > [x] probably more
> |
> | My guess would probably be that you are alluding to SIP, but that's 
> hm yeah i guess SIP qualifies too. i was obviously talking of PSYC.
> | pretty much irrelevant, part of the whole point of XMPP is that it is a 
> | streaming XML protocol, binary framing doesn't really fit in with this, 
> XML was the hype thing in 1998, so it was used for IM.
> it's not like a protocol was invented to do XML, then applied to IM.
> but i know you didn't mean that. it just sounded that way.  :)


This message has optimized support for formatting.
Please choose green font and black background so it looks like it should.

Michal "vorner" Vaner
