[149085] in North American Network Operators' Group


Re: pontification bloat (was 10GE TOR port buffers (was Re: 10G


Date: Fri, 27 Jan 2012 17:56:01 -0800
From: Leo Bicknell <bicknell@ufp.org>
To: "North American Network Operators' Group" <nanog@nanog.org>
Mail-Followup-To: North American Network Operators' Group <nanog@nanog.org>
In-Reply-To: <m2pqe47guv.wl%randy@psg.com>



In a message written on Sat, Jan 28, 2012 at 10:31:20AM +0900, Randy Bush wrote:
> when a line card is designed to buffer the b*d of a trans-pac 40g, the
> oddities on an intra-pop link have been observed to spike to multiple
> seconds.

Please turn that buffer down.

It's bad enough to take a 100ms hop across the Pacific.  It's far
worse when there's another 0-100ms of buffering piled on top. :(

Unless that 40G is carrying something like 4x10Gbps TCP flows, you
don't need a full b*d of buffer; 10ms of buffer would be a good
number.  I bet many of your other problems go away.
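
To put rough numbers on it (back-of-the-envelope only, assuming a
~100ms trans-Pacific RTT and the full 40G rate, not measurements from
anyone's network):

# Rough buffer arithmetic: full b*d sizing vs. a 10ms buffer.
# Link rate and RTT here are illustrative assumptions.
link_bps = 40e9          # 40G trans-Pacific link
rtt_s    = 0.100         # ~100ms round trip

bdp_bytes = link_bps * rtt_s / 8
print("b*d buffer:   %.0f MB (up to another full RTT of queueing delay)"
      % (bdp_bytes / 1e6))

buf_10ms_bytes = link_bps * 0.010 / 8
print("10ms buffer:  %.0f MB (adds at most 10ms of delay)"
      % (buf_10ms_bytes / 1e6))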

> so, do you have wred enabled anywhere?  who actually has it enabled?
>
> (embarrassed to say, but to set an honest example, i do not believe iij
> does)

My current employment offers few places where it is appropriate.
However, cribbing from a previous job where I rolled it out network-wide:

policy-map atm-queueing-out
  class class-default
   fair-queue
   random-detect
   random-detect precedence 0   10    40    10
   random-detect precedence 1   13    40    10
   random-detect precedence 2   16    40    10
   random-detect precedence 3   19    40    10
   random-detect precedence 4   22    40    10
   random-detect precedence 5   25    40    10
   random-detect precedence 6   28    40    10
   random-detect precedence 7   31    40    10

int atm1/0.1
 pvc 1/105
  vbr-nrt 6000 5000 600
  tx-ring-limit 4
  service-policy output atm-queueing-out

Those packet thresholds were computed as the best balance for
6-20Mbps PVCs on an ATM interface.  Also notice that the hardware
tx-ring-limit had to be reduced to make the WRED effective: on the
platforms in question (7206VXRs) there is a hardware buffer below
the software WRED that is way too big.
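
For anyone decoding those random-detect lines: the three numbers
after the precedence are the min threshold, max threshold, and
mark-probability denominator (packets, packets, 1-in-N).  A rough
sketch of the drop curve that implies for precedence 0, assuming the
usual IOS interpretation of those fields:

# Sketch of the WRED drop curve implied by
#   random-detect precedence 0   10    40    10
# i.e. min-threshold 10 pkts, max-threshold 40 pkts,
# mark-probability denominator 10 (1-in-10 drop at max).
MIN_TH, MAX_TH, MARK_DENOM = 10, 40, 10

def drop_probability(avg_queue_pkts):
    if avg_queue_pkts < MIN_TH:
        return 0.0                      # below min threshold: never drop
    if avg_queue_pkts >= MAX_TH:
        return 1.0                      # above max threshold: drop everything
    # between the thresholds: ramp linearly up to 1/MARK_DENOM
    ramp = (avg_queue_pkts - MIN_TH) / float(MAX_TH - MIN_TH)
    return ramp / MARK_DENOM

for q in (5, 10, 25, 39, 40):
    print("avg queue %2d pkts -> drop prob %.3f" % (q, drop_probability(q)))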

Here's one to wrap your head around.  You have an ATM OC-3 with 40
PVCs on it.  Each PVC has a WRED config allowing up to 40 packets to
be buffered.  Some genius in security fires off a network scanning
tool across all 40 sites.

Yes, you now have 40*40, or 1600 packets of buffer on your single
physical port.  :(  If you work with Frame Relay or ATM, or even
dot1q VLANs, you have to be careful of per-subinterface buffering.
It can quickly get absurd.
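
To see how absurd (assuming full-size 1500-byte packets and roughly
135Mbps of usable ATM payload on the OC-3, both assumptions for
illustration only):

# Why per-subinterface buffering gets absurd: 40 PVCs, each allowed
# to queue up to its WRED max of 40 packets, all draining one OC-3.
pvcs         = 40
pkts_per_pvc = 40
pkt_bytes    = 1500          # assume full-size packets
oc3_payload  = 135e6         # ~135Mbps usable OC-3 payload (approx.)

total_pkts  = pvcs * pkts_per_pvc                 # 1600 packets
total_bytes = total_pkts * pkt_bytes              # ~2.4 MB queued
drain_s     = total_bytes * 8 / oc3_payload       # time to drain the port

print("%d packets, %.1f MB, ~%.0f ms to drain the physical port"
      % (total_pkts, total_bytes / 1e6, drain_s * 1000))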

--
       Leo Bicknell - bicknell@ufp.org - CCIE 3440
        PGP keys at http://www.ufp.org/~bicknell/


