

Re: Calculating Jitter

daemon@ATHENA.MIT.EDU (Eric Frazier)
Fri Jun 10 13:05:43 2005

Date: Fri, 10 Jun 2005 10:07:06 -0700
To: Fred Baker <fred@cisco.com>, Jeff Murri <jeff@nessoft.com>
From: Eric Frazier <eric@dmcontact.com>
Cc: nanog@nanog.org
In-Reply-To: <f535d6ef5a161fc74ac0ba03f4284aed@cisco.com>
Errors-To: owner-nanog@merit.edu


At 09:56 AM 6/10/2005, Fred Baker wrote:

>You saw Marshall's comment. If you're interested in a moving average, he's
>pretty close.
>
>If I understood your question, though, you simply wanted to quantify the 
>jitter in a set of samples. I should think there are two obvious 
>definitions there.
>
>A statistician would look, I should think, at the variance of the set.
>Reaching for my CRC book of standard math formulae and tables, I find it
>defines the variance as the square of the standard deviation of the set,
>which is to say
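
The formula the CRC book presumably goes on to give is the usual one, variance = (1/N) * sum((x_i - mean)^2), where mean is the average of the N samples. A minimal Python sketch with made-up delay values, just to make the relationship between the two quantities concrete:

    # Hypothetical one-way delay samples in milliseconds (illustrative only).
    import statistics

    delays_ms = [20.1, 22.4, 19.8, 35.0, 21.2, 20.7]

    mean = statistics.fmean(delays_ms)
    # Population variance: the mean squared deviation from the average.
    variance = sum((d - mean) ** 2 for d in delays_ms) / len(delays_ms)
    # Standard deviation is just the square root of the variance.
    stdev = variance ** 0.5

    # The stdlib helpers agree with the hand-rolled versions.
    assert abs(variance - statistics.pvariance(delays_ms)) < 1e-9
    assert abs(stdev - statistics.pstdev(delays_ms)) < 1e-9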

That is one thing I have never understood: if you can pretty much just look
at a standard deviation and see that it is high, and yeah, that means your
numbers are flopping all over the place, then what good is the square of it?
Does it just make graphing better in some way?

Thanks,

Eric 

