[132510] in North American Network Operators' Group


Re: Jumbo frame Question


In-Reply-To: <BLU158-w4103A99FE4DBC773F21855DC210@phx.gbl>
From: Joel Jaeggli <joelja@bogus.com>
Date: Fri, 26 Nov 2010 13:29:45 -0800
To: Brandon Kim <brandon.kim@brandontek.com>
Cc: nanog group <nanog@nanog.org>

10/100 switches and NICs pretty much universally do not support jumbos.

Joel's widget number 2

On Nov 26, 2010, at 8:02, Brandon Kim <brandon.kim@brandontek.com> wrote:

>
> Where would the world be if we weren't stuck at 1500 MTU? I've always kinda
> thought, what if that had been larger from the start....
>
> We keep getting faster switchports, but the MTU is still 1500! I'm sure
> someone has done some testing with a 10/100 switch with jumbo frames enabled
> versus a 10/100/1000 switch using regular 1500 MTU and compared the
> performance.....
>
>> Subject: RE: Jumbo frame Question
>> Date: Thu, 25 Nov 2010 21:14:02 -0800
>> From: gbonser@seven.com
>> To: harris.hui@hk1.ibm.com; nanog@nanog.org
>>
>>> Hi
>>>
>>> Does anyone have experience designing / implementing a jumbo-frame-enabled
>>> network?
>>>
>>> I am working on a project to better utilize a fiber link between the east
>>> coast and the west coast with Juniper devices.
>>>
>>> Based on the default TCP windows in Linux / Windows, the latency between
>>> the east coast and the west coast (~80ms), and the default MTU size of
>>> 1500, the maximum throughput of a single TCP session is only around ~3Mbps,
>>> which is too slow for us to back up the huge amount of data across the
>>> 2 sites.
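The ~3Mbps figure above follows from the window/RTT relationship: without
window scaling, a single TCP session can move at most one receive window per
round trip. A rough back-of-the-envelope sketch, where the 80ms RTT comes from
the post but the window sizes are assumed, illustrative defaults rather than
measurements from these hosts:

# Single-session TCP throughput is bounded by window_size / RTT when there
# is no loss. RTT (~80 ms) is from the post; the window sizes below are
# assumed, illustrative defaults, not measured values.

RTT = 0.080  # seconds, coast to coast

for label, window_bytes in [
    ("64 KB (largest window without window scaling)", 65535),
    ("~17 KB (an older Windows default)", 17520),
]:
    mbps = window_bytes * 8 / RTT / 1e6
    print(f"{label}: ~{mbps:.1f} Mbit/s per TCP session")

The quoted ~3Mbps sits between those two cases. Note that the MTU barely moves
this number on its own, which is why the reply below pairs jumbo frames with
TCP stack tweaks.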
>>
>> There are a lot of stack tweaks you can make, but the real answer is larger
>> MTU sizes in addition to those tweaks. Our network is completely 9000 MTU
>> internally; we don't deploy any servers anymore with MTU 1500. MTU 1500 is
>> just plain stupid on any network faster than 100 Mb Ethernet.
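To put numbers on the 1500-vs-9000 point: larger frames cut both the header
overhead per payload byte and, more importantly for older hosts, the frames
(and interrupts) per second at a given rate. A minimal sketch using standard
Ethernet/IPv4/TCP header sizes and a 1Gb/s link; nothing here is specific to
the poster's network:

# Compare wire efficiency and frame rate at MTU 1500 vs MTU 9000 on a
# 1 Gbit/s link. Assumes plain IPv4 + TCP with no options and untagged
# Ethernet: 20 B IP + 20 B TCP inside the frame, 14 B Ethernet header,
# 4 B FCS and 20 B preamble/inter-frame gap on the wire.

LINK_BPS = 1_000_000_000

for mtu in (1500, 9000):
    tcp_payload = mtu - 40            # strip IPv4 + TCP headers
    wire_bytes = mtu + 14 + 4 + 20    # framing + preamble/IFG
    frames_per_sec = LINK_BPS / 8 / wire_bytes
    print(f"MTU {mtu}: {tcp_payload / wire_bytes:.1%} payload efficiency, "
          f"~{frames_per_sec:,.0f} frames/s at line rate")

The efficiency gain is only a few percent; the roughly six-fold drop in
per-packet work is usually the bigger practical win.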
>>
>>> The following is the topology that we are using right now:
>>>
>>> Host A NIC (MTU 9000) <--- GigLAN ---> (MTU 9216) Juniper EX4200 (MTU 9216)
>>> <--- GigLAN ---> (MTU 9018) J-6350 cluster A (MTU 9018) <--- fiber link
>>> across site ---> (MTU 9018) J-6350 cluster B (MTU 9018) <--- GigLAN --->
>>> (MTU 9216) Juniper EX4200 (MTU 9216) <--- GigLAN ---> (MTU 9000) NIC - Host B
>>>
>>> I was trying to test the connectivity from Host A to the J-6350 cluster A
>>> by using an ICMP ping with size 8000 and the DF bit set, but the ping
>>> failed.
>>>
>>> Does anyone have experience with this? Please advise.
>>>
>>> Thanks :-)
>>
>> You might have some transport in the path (SONET?) that can't send 8000.
>> I would try starting at 3000 and working up to find where your limit is.
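A sketch of that sweep, assuming Linux iputils ping ("-M do" sets the DF bit,
"-s" sets the ICMP payload size, so the IP packet on the wire is payload + 28
bytes); the target hostname and the size range are placeholders:

# Probe a path with DF-set pings of increasing size to find where large
# packets stop getting through. Assumes Linux iputils ping: "-M do" forbids
# fragmentation, "-s" sets the ICMP payload size (the IP packet is
# payload + 28 bytes of ICMP and IP headers). The target below is a
# placeholder, not an address from the thread.

import subprocess

TARGET = "cluster-a.example.net"

def ping_df(payload: int) -> bool:
    """One DF-set ping of the given payload size; True if a reply came back."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(payload), TARGET],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

# Start at 3000 and work upward, as suggested above.
for payload in range(3000, 9001, 500):
    status = "ok" if ping_df(payload) else "FAILED"
    print(f"payload {payload:5d} (IP packet {payload + 28}): {status}")

If the sweep stops passing somewhere around 4470 bytes, a SONET/POS hop with
its common default MTU is a likely suspect.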
>>
>> Your description of "fiber link across site" is vague. Who is the
>> vendor, what kind of service?

