[132492] in North American Network Operators' Group
RE: Jumbo frame Question
daemon@ATHENA.MIT.EDU (Brandon Kim)
Fri Nov 26 11:02:43 2010
From: Brandon Kim <brandon.kim@brandontek.com>
To: <gbonser@seven.com>, <harris.hui@hk1.ibm.com>, nanog group
<nanog@nanog.org>
Date: Fri, 26 Nov 2010 11:02:39 -0500
In-Reply-To: <5A6D953473350C4B9995546AFE9939EE0B14CB69@RWC-EX1.corp.seven.com>
Errors-To: nanog-bounces+nanog.discuss=bloom-picayune.mit.edu@nanog.org
Where would the world be if we weren't stuck at 1500 MTU? I've always kinda
thought, what if that was larger from the start....

We keep getting faster switchports, but the MTU is still 1500! I'm sure
someone has done some testing with a 10/100 switch with jumbo frames enabled
versus a 10/100/1000 switch using regular 1500 MTU and compared the
performance.....
> Subject: RE: Jumbo frame Question
> Date: Thu, 25 Nov 2010 21:14:02 -0800
> From: gbonser@seven.com
> To: harris.hui@hk1.ibm.com; nanog@nanog.org
>
> > Hi
> >
> > Does anyone have experience on design / implementing the Jumbo frame
> > enabled network?
> >
> > I am working on a project to better utilize a fiber link across east
> > coast
> > and west coast with the Juniper devices.
> >
> > Based on the default TCP windows in Linux / Windows, the latency
> > between east coast and west coast (~80ms), and the default MTU size
> > 1500, the maximum throughput of a single TCP session is around ~3Mbps,
> > but that is too slow for backing up the huge amount of data across the
> > 2 sites.
>
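The ~3 Mbps figure quoted above follows from the classic single-session bound: throughput can't exceed window size divided by round-trip time. A minimal sketch, assuming a ~32 KB default window (actual OS defaults vary, which is an assumption here, not something stated in the thread):

```python
# Back-of-the-envelope TCP throughput limit for one session.
# Without window scaling, a sender can have at most one window of data
# in flight per round trip, so throughput <= window_size / RTT.

def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Maximum single-session TCP throughput, limited by window / RTT."""
    return window_bytes * 8 / rtt_seconds

rtt = 0.080  # ~80 ms east coast <-> west coast, per the thread
for window in (32 * 1024, 64 * 1024, 256 * 1024):
    mbps = max_tcp_throughput_bps(window, rtt) / 1e6
    print(f"{window // 1024:4d} KB window -> {mbps:5.2f} Mbps")
```

A 32 KB window over 80 ms works out to roughly 3.3 Mbps, consistent with the figure quoted; larger windows (or window scaling) raise the ceiling proportionally.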
> There are a lot of stack tweaks you can make but the real answer is
> larger MTU sizes in addition to those tweaks. Our network is completely
> 9000 MTU internally. We don't deploy any servers anymore with MTU 1500.
> MTU 1500 is just plain stupid with any network >100mb ethernet.
>
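The efficiency argument above can be quantified: larger frames spend a smaller fraction of the wire on framing and headers, and hosts process far fewer packets per gigabyte. A rough sketch, assuming plain IPv4/TCP with no options and standard Ethernet framing (preamble and inter-frame gap included; these constants are my assumptions, not from the thread):

```python
# Fraction of wire bits that carry TCP payload at a given MTU.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP, no options

def goodput_fraction(mtu: int) -> float:
    """Payload bytes delivered per byte of wire time at a given MTU."""
    return (mtu - IP_TCP_HEADERS) / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {goodput_fraction(mtu):.2%} of line rate is payload")
```

The headline win from jumbo frames is usually the ~6x drop in packets per second (and thus interrupts and per-packet processing) rather than the few percent of raw wire efficiency.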
> > The following is the topology that we are using right now.
> >
> > Host A NIC (MTU 9000) <--- GigLAN ---> (MTU 9216) Juniper EX4200 (MTU 9216)
> > <--- GigLAN ---> (MTU 9018) J-6350 cluster A (MTU 9018) <--- fiber link
> > across site ---> (MTU 9018) J-6350 cluster B (MTU 9018) <--- GigLAN --->
> > (MTU 9216) Juniper EX4200 (MTU 9216) <--- GigLAN ---> (MTU 9000) NIC - Host B
> >
> > I was trying to test the connectivity from Host A to the J-6350
> > cluster A by using ICMP ping with size 8000 and the DF bit set, but
> > the ping failed.
> >
> > Does anyone have experience with this? Please advise.
> >
> > Thanks :-)
>
> You might have some transport in the path (SONET?) that can't send 8000.
> I would try starting at 3000 and working up to find where your limit is.
>
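The suggested sweep (start at 3000, work upward) can be sketched as below. This is a hypothetical helper, not something from the thread: it assumes Linux `ping` flags (`-M do` sets the DF bit, `-s` sets the ICMP payload size; BSD/macOS differ), and it relies on the fact that an ICMP echo packet of a given total size needs a payload of that size minus 28 bytes (20-byte IP header plus 8-byte ICMP header).

```python
# Sketch of a DF-set ping sweep to find where the path stops passing
# large packets. Linux ping flags are assumed.
import subprocess

def payload_for_mtu(mtu: int) -> int:
    """ICMP payload size that produces a packet of exactly `mtu` bytes."""
    return mtu - 28  # 20-byte IPv4 header + 8-byte ICMP header

def probe(host: str, mtu: int) -> bool:
    """True if one DF-set ping of `mtu` total bytes gets through."""
    cmd = ["ping", "-c", "1", "-W", "2", "-M", "do",
           "-s", str(payload_for_mtu(mtu)), host]
    return subprocess.run(cmd, capture_output=True).returncode == 0

def find_path_mtu(host: str, lo: int = 3000, hi: int = 9018, step: int = 500) -> int:
    """Largest probed size that passes, sweeping upward as suggested."""
    best = 0
    for mtu in range(lo, hi + 1, step):
        if probe(host, mtu):
            best = mtu
    return best
```

Note that a "size 8000" ping as described in the original test actually puts 8028 bytes on the wire once headers are added, which is worth keeping in mind when interpreting where the probe starts failing.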
> Your description of "fiber link across site" is vague. Who is the
> vendor, what kind of service?
>
>