[190724] in North American Network Operators' Group
Re: MTU
daemon@ATHENA.MIT.EDU (Lee)
Fri Jul 22 18:45:55 2016
X-Original-To: nanog@nanog.org
In-Reply-To: <C1414C7D-285D-420E-BD99-5027F5A2EC48@isprime.com>
From: Lee <ler762@gmail.com>
Date: Fri, 22 Jul 2016 18:45:49 -0400
To: Phil Rosenthal <pr@isprime.com>
Cc: nanog@nanog.org
Errors-To: nanog-bounces@nanog.org
On 7/22/16, Phil Rosenthal <pr@isprime.com> wrote:
>
>> On Jul 22, 2016, at 1:37 PM, Grzegorz Janoszka <Grzegorz@Janoszka.pl>
>> wrote:
>> What I noticed a few years ago was that BGP converged faster with a
>> higher MTU.
>> A full BGP table load took half as long at MTU 9192 as at 1500.
>> Of course, BGP has to be allowed to use the higher MTU.
>>
>> Anyone else observed something similar?
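(For scale: assuming 40 bytes of IP+TCP header, MTU 9192 gives a
9152-byte MSS vs 1460 at MTU 1500 - roughly 6x fewer segments for the
same table, so slow start needs fewer round trips to move it.)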
>
> I have read about others experiencing this, and did some testing a few
> months back -- my experience was that for low-latency links, there was a
> measurable but not huge difference. For high-latency links, with Juniper
> anyway, the difference was negligible, because the TCP window size is
> hard-coded at something small (16384?), so that ends up being the limit
> rather than the TCP slow-start issues that a larger MTU helps with.
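That matches the window math: a TCP session tops out at roughly
window/RTT, so a 16384-byte window over a 100 ms path caps out around
1.3 Mb/s (16384 * 8 / 0.1) no matter how large the MTU is.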
I think the Cisco default window size is also 16 KB, but you can change
it with
  ip tcp window-size NNN
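e.g. (from memory, so check your platform - the accepted range varies,
and existing BGP sessions have to be re-established to pick up the new
window):

  router(config)# ip tcp window-size 65535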
Lee
>
> With that said, we run MTU at >9000 on all of our transit links and all of
> our internal links, with no problems. Make sure to test by sending pings
> with do-not-fragment set at the maximum size configured, and without
> do-not-fragment at a size just slightly larger than the maximum configured,
> to make sure there are no configuration mismatches due to vendor
> differences.
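On IOS that test looks something like this (syntax from memory, with
192.0.2.1 standing in for the far end, and keeping in mind that vendors
differ on whether the size includes headers):

  ping 192.0.2.1 size 9192 df-bit    <- should succeed
  ping 192.0.2.1 size 9193           <- should fragment and still succeed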
>
> Best Regards,
> -Phil Rosenthal