Subject: Re: pontification bloat (was 10GE TOR port buffers (was Re: 10G
Date: Fri, 27 Jan 2012 18:21:05 -0800
From: Leo Bicknell <bicknell@ufp.org>
To: "North American Network Operators' Group" <nanog@nanog.org>
In-Reply-To: <m2mx987ffd.wl%randy@psg.com>
In a message written on Sat, Jan 28, 2012 at 11:02:14AM +0900, Randy Bush wrote:
> one problem is that we do not have good tools to look at a link and
> suggest parms. how did you derive those?
It's actually simple math; it just can get moderately complex.
Let's say you have a 10Mbps ethernet interface, and you want to set
the queue size (in packets).
10Mbps is ~1250000 bytes/sec.
Now, I pick an arbitrary value; this is where experience comes in.
For this example I'm going to say I want no more than 5ms of queuing
latency. 5ms / 1000ms/sec * 1250000 bytes/sec = 6250 bytes.
I then look at my MTU; we'll go with 1500 here. 6250 / 1500 = 4.16
packets.
So queuing around 4 full-sized packets generates 0-5ms of jitter
on a 10Mbps ethernet, worst case.
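Here's the same arithmetic as a quick Python sketch (the function name
and the 10Mbps/5ms/1500-byte inputs are just the worked example above,
not anything platform-specific):

  # Max packets to queue so worst-case queuing delay <= target.
  def queue_depth_packets(link_bps, target_delay_ms, mtu_bytes):
      bytes_per_sec = link_bps / 8                  # 10Mbps -> 1,250,000 B/s
      queue_bytes = bytes_per_sec * target_delay_ms / 1000
      return queue_bytes / mtu_bytes                # full-sized packets

  print(queue_depth_packets(10_000_000, 5, 1500))   # ~4.17 packets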
How many ms is good? Well, that depends, a lot. However, I suspect most
people here have seen enough pings that they have some idea what is good
and what isn't.
From there you have to look at whether there is a hardware ring buffer
under the software QOS (not on most interfaces, but yes on some), and
then look at whether the buffer is per-VC (subint, whatever) on an
interface with multiple subinterfaces.
This is as much art as science. My rules of thumb:
- High speed backbone interfaces, 1-3ms of buffer.
- Medium to high speed links inside of a single pop/site, 2-5ms of buffer.
- Low speed access/edge, 5-20ms of buffer.
I have rarely seen an application benefit from more than 20ms of buffer.
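In concrete terms, here's a rough sketch of the byte budgets those
ranges imply (the link speed paired with each rule is my own
illustrative choice, not part of the rule):

  # Byte budgets implied by the rules of thumb above; the speeds
  # paired with each rule are illustrative, not prescriptive.
  rules = [
      ("backbone, 10Gbps",    10_000_000_000, 1,  3),
      ("intra-pop, 1Gbps",     1_000_000_000, 2,  5),
      ("access/edge, 10Mbps",     10_000_000, 5, 20),
  ]
  for name, bps, lo_ms, hi_ms in rules:
      bytes_per_sec = bps / 8
      print(f"{name}: {bytes_per_sec * lo_ms / 1000:,.0f}"
            f"-{bytes_per_sec * hi_ms / 1000:,.0f} bytes of buffer")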
Remember, this is per hop. If you take a 20 hop traceroute and each hop
has 20ms of buffer, you could be waiting 400ms if all the buffers were
full! That's even if all 20 hops are in the same building, so the RTT
is 1ms. Imagine how crappy that 1ms RTT would be with random
4/10ths-of-a-second pauses!
Now, here's where it gets non-intuitive. Reducing the buffers will
_increase_ packet drops, which will make your customers _happier_.
It will also generally smooth out sawtooth patterns (caused by
congestion collapse synchronization: everyone fills the buffer at the
same time, backs off at the same time, etc). So your links may go from
spiky between 90-100% to flatlined at 100%, but your customers will be
happier.
Run the math the other way to see how many ms your current buffer size
allows the router to hold.
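A sketch of that reverse calculation, assuming full-sized 1500-byte
packets (the 1000-packet example is the old Linux txqueuelen default):

  # Reverse direction: how many ms of worst-case queuing delay
  # does an existing queue limit (in packets) allow?
  def queue_delay_ms(link_bps, queue_packets, mtu_bytes=1500):
      bytes_per_sec = link_bps / 8
      return queue_packets * mtu_bytes / bytes_per_sec * 1000

  # e.g. a 1000-packet queue on a 100Mbps link:
  print(queue_delay_ms(100_000_000, 1000))   # 120.0 ms -- far past 20ms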
--
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/