Calculating Jitter

Jeff Murri jeff at nessoft.com
Fri Jun 10 09:53:09 UTC 2005


I'm hoping this post isn't out of line with the scope of the NANOG 
list, of which I've been a long-time lurker.  If so, please just 
ignore me. 

We're trying to calculate jitter over a variable-size (effectively 
unbounded) data set.  One jitter formula that we see cited 
occasionally (it's in RFC 1889, and I believe iPerf uses it for its 
jitter numbers) looks something like this:

J = J + (|D(i-1,i)| - J)/16
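
For concreteness, here's a minimal sketch of that smoothing in 
Python.  The function name, and the assumption that the input is a 
list of per-packet transit times (so D(i-1,i) is just the difference 
between consecutive transit times), are mine:

    # RFC 1889-style jitter estimate: an exponential moving average
    # of |D(i-1,i)| with a gain of 1/16.
    def rfc1889_jitter(transit_times):
        j = 0.0
        for prev, cur in zip(transit_times, transit_times[1:]):
            d = cur - prev          # D(i-1,i)
            j += (abs(d) - j) / 16.0
        return j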

The problem with this formula is that it works best on small sample 
sets, and it favors more recent samples.  As the sample size grows, 
the jitter of early samples gets factored down to basic "noise", 
and so those samples aren't really well represented in the overall 
jitter number.

We're trying to find a viable formula for showing a general jitter 
"average" over a period of time.  One possibility here is just to 
iterate over all samples like this:

Jsum = Jsum + |D(i-1,i)|

and then calculate the jitter like this:

J = Jsum / (sample count - 1)
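
As a rough Python sketch, under the same assumptions as above:

    # Plain average of |D(i-1,i)| over the whole sample set.
    def mean_jitter(transit_times):
        n = len(transit_times)
        if n < 2:
            return 0.0
        jsum = sum(abs(cur - prev)
                   for prev, cur in zip(transit_times, transit_times[1:]))
        return jsum / (n - 1)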

The sample count could be anywhere from 2 to 1 million (or more).  This 
formula does seem to represent early samples in the jitter number just 
as strongly as later samples, but it might be a bit simplistic.
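
One implementation note (my own observation, not from any RFC): the 
same average can be maintained incrementally, so a million-sample run 
doesn't need Jsum to grow without bound or the samples to be stored:

    # Incremental form of the same average: keep only the running
    # mean and the count of sample pairs seen so far.  Mathematically
    # identical to Jsum / (sample count - 1), but O(1) memory.
    def update_mean_jitter(j, pairs, d):
        pairs += 1
        j += (abs(d) - j) / pairs
        return j, pairs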

Does anyone have any feedback on this alternate way of calculating 
Jitter, or any better ways to do this?

Thanks in advance for any input.

Jeff Murri
Nessoft, LLC
jeff at nessoft.com
www.nessoft.com



