Jumbo frame Question
Joel Jaeggli
joelja at bogus.com
Fri Nov 26 21:29:45 UTC 2010
10/100 switches and NICs pretty much universally do not support jumbos.
Joel's widget number 2
On Nov 26, 2010, at 8:02, Brandon Kim <brandon.kim at brandontek.com> wrote:
>
> Where would the world be if we weren't stuck at 1500 MTU? I've always kinda thought, what if that was larger
> from the start....
>
> We keep getting faster switchports, but the MTU is still stuck at 1500! I'm sure someone has done some testing with
> a 10/100 switch with jumbo frames enabled versus a 10/100/1000 switch using regular 1500 MTU and compared
> the performance.....
>
>
>
>
>> Subject: RE: Jumbo frame Question
>> Date: Thu, 25 Nov 2010 21:14:02 -0800
>> From: gbonser at seven.com
>> To: harris.hui at hk1.ibm.com; nanog at nanog.org
>>
>>> Hi
>>>
>>> Does anyone have experience designing / implementing a jumbo-frame-enabled
>>> network?
>>>
>>> I am working on a project to better utilize a fiber link between the east
>>> coast and the west coast with Juniper devices.
>>>
>>> Based on the default TCP windows in Linux / Windows, the latency between
>>> the east coast and the west coast (~80 ms), and the default MTU of 1500,
>>> the maximum throughput of a single TCP session is around ~3Mbps, which is
>>> too slow for backing up the huge amount of data between the two sites.
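As a rough sanity check on that number (assuming a classic 64 KB receive
window with no window scaling):

    throughput ~ window / RTT = 65,535 bytes / 0.080 s ~ 820 KB/s ~ 6.5 Mb/s

An effective window around 32 KB gives about half that, i.e. the ~3Mbps figure
above. The window, not the frame size, sets that ceiling, which is why the
stack tweaks mentioned below matter at least as much as the MTU.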
>>
>> There are a lot of stack tweaks you can make but the real answer is
>> larger MTU sizes in addition to those tweaks. Our network is completely
>> 9000 MTU internally. We don't deploy any servers anymore with MTU 1500.
>> MTU 1500 is just plain stupid on any network faster than 100 Mb Ethernet.
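For reference, the usual Linux-side window tweaks are larger socket buffer
limits plus window scaling; a minimal sysctl sketch (the values are
illustrative, not tuned for any particular box) would be something like:

    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.tcp_window_scaling = 1

At an 80 ms RTT a 16 MB window allows well over a gigabit for a single stream,
independent of the MTU change.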
>>
>>> The following is the topology that we are using right now.
>>>
>>> Host A NIC (MTU 9000) <--- GigLAN ---> (MTU 9216) Juniper EX4200 (MTU 9216)
>>> <--- GigLAN ---> (MTU 9018) J-6350 cluster A (MTU 9018)
>>> <--- fiber link across site ---> (MTU 9018) J-6350 cluster B (MTU 9018)
>>> <--- GigLAN ---> (MTU 9216) Juniper EX4200 (MTU 9216)
>>> <--- GigLAN ---> (MTU 9000) NIC - Host B
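As a sketch of what those per-hop numbers amount to (interface names are
placeholders, not taken from the original post): on the Linux hosts the
interface MTU is a one-liner, and on the Juniper boxes the physical MTU is
set under the interface, where the larger 9216/9018 figures leave room for L2
headers on top of a 9000-byte IP payload:

    # Linux host
    ip link set dev eth0 mtu 9000

    # Junos interface (physical MTU includes Ethernet overhead)
    set interfaces ge-0/0/0 mtu 9216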
>>>
>>> I was trying to test the connectivity from Host A to the J-6350 cluster A
>>> by using an ICMP ping with size 8000 and the DF bit set, but the ping
>>> failed.
>>>
>>> Does anyone have experience on it? please advise.
>>>
>>> Thanks :-)
>>
>> You might have some transport in the path (SONET?) that can't send 8000.
>> I would try starting at 3000 and working up to find where your limit is.
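One way to walk that up from a Linux host: the -s value is the ICMP payload,
so add 28 bytes of IP+ICMP header to get the packet size (the sizes below are
just probe points, and <far-end> is whatever address you are testing toward):

    ping -M do -s 2972 <far-end>    # 3000-byte packets, DF set
    ping -M do -s 8172 <far-end>    # 8200-byte packets
    ping -M do -s 8972 <far-end>    # 9000-byte packets

The size where replies stop (or where frag-needed errors start coming back)
points at the hop that is clamping the path MTU.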
>>
>> Your description of "fiber link across site" is vague. Who is the
>> vendor, what kind of service?
>>
>>
>
>