Updated Ookla Speedtest Server Requirements

alvin nanog nanogml at Mail.DDoS-Mitigator.net
Mon Nov 9 23:44:02 UTC 2015


On 11/09/15 at 05:35pm, Josh Luthman wrote:
> You can't get SFP+ PCI cards anyways.  I don't think you can very easily
> get boards with PCI on them, either.

lots of vendors offer PCIe 10GigE cards w/ copper or SFP+ interfaces ...

very few ( almost none ) offer 10GigE on plain old PCI, and more importantly,
the legacy PCI bus couldn't feed a 10GigE port anyway

i usually get intel chipset based SFP+ PCIe x8 dual-10GigE cards, and
similarly dual 10GigE copper cards for testing on GigE infrastructure

not many "tyan/supermicro" motherboards with dual-gigE SFP+ connectors
( our primary requirement is dual-nic 10gigE ports squeezed into 1U chassis )
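
fwiw, on linux you can sanity-check what link the NIC actually negotiated
straight from sysfs ... a minimal python sketch, assuming a reasonably
modern linux kernel, w/ "eth0" as a placeholder interface name :

    import os

    IFACE = "eth0"  # placeholder -- substitute your 10GigE interface

    # /sys/class/net/<iface>/device is a symlink to the PCI device dir,
    # which exposes the negotiated and maximum link speed / width
    dev = f"/sys/class/net/{IFACE}/device"

    for attr in ("current_link_speed", "current_link_width",
                 "max_link_speed", "max_link_width"):
        try:
            with open(os.path.join(dev, attr)) as f:
                print(f"{attr}: {f.read().strip()}")
        except OSError:
            print(f"{attr}: not exposed for this device")

a card that fits an x8 slot but negotiates x4 ( or gen1 instead of gen3 )
quietly loses half or more of its bandwidth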

yes, it could be slow ... and most likely won't even run at the "bit speeds"
quoted by marketing, but not everybody goes around measuring their actual bandwidth

similarly, there's not much marketing anymore for 3.xGHz or 4.xGHz or 6.xGHz
"cpu speeds" ... for the past decade it's been about how many cores you can
squeeze into a square inch, since cores are trivially easy to count and
their performance easy to measure :-)

PCIe-3.x spec'd at ~8Gbit/s per lane	# so x8 easily covers 40Gbit/s
PCIe-2.x spec'd at 4Gbit/s per lane
PCIe-1.x spec'd at 2Gbit/s per lane

( lane totals below use the PCIe-1.x rate ... double for 2.x, ~4x for 3.x )

PCIe x1 == 2Gbit/s ( per lane )
PCIe x4 == "combined" 8Gbit/s 
PCIe x8 == "combined" 16Gbit/s 	( common PCIe cards )
PCIe x16 == "combined" 32Gbit/s
PCIe x32 == "combined" 64Gbit/s
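
quick python back-of-the-envelope to make the lane math concrete ... the
per-lane numbers are the effective rates from the table above, the function
name is just illustrative, and 20Gbit/s is what a dual-port 10GigE card
needs at line rate :

    # effective per-lane data rates in Gbit/s per PCIe generation
    # ( gen1 / gen2 lose ~20% to 8b/10b encoding ; gen3's 128b/130b is near-lossless )
    LANE_GBPS = {1: 2.0, 2: 4.0, 3: 8.0}

    def pcie_gbps(gen, lanes):
        """one-direction aggregate bandwidth for a given gen / lane count"""
        return LANE_GBPS[gen] * lanes

    # can an x8 slot keep a dual-port 10GigE card at line rate ?
    for gen in (1, 2, 3):
        bw = pcie_gbps(gen, 8)
        verdict = "yes" if bw >= 20 else "no"
        print(f"PCIe {gen}.x x8 : {bw:.0f} Gbit/s ... dual 10GigE at line rate : {verdict}")

( gen1 x8 == 16Gbit/s comes up short ... gen2 and gen3 clear it easily,
and only gen3 x8 clears 40Gbit/s )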

> I expect he was just typing it out and left the "E" =)

just being brain-dead lazy w/ assumptions ... and PCI vs PCIe does make a
big difference in network throughput
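
to put rough numbers on that difference ... legacy parallel PCI peaks at
bus width x clock, shared across every card on the bus :

    # legacy parallel PCI peak rates : bus width ( bits ) x clock ( Hz )
    # the whole bus is shared, unlike PCIe's dedicated per-device lanes
    PCI_BUSES = {
        "PCI 32-bit / 33MHz":    32 * 33e6,
        "PCI 64-bit / 66MHz":    64 * 66e6,
        "PCI-X 64-bit / 133MHz": 64 * 133e6,
    }

    for name, bps in PCI_BUSES.items():
        print(f"{name}: {bps / 1e9:.2f} Gbit/s peak, before protocol overhead")

    # even PCI-X tops out near 8.5 Gbit/s -- less than one 10GigE port,
    # while a PCIe gen3 x8 slot carries ~64 Gbit/s all by itself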

magic pixie dust
alvin
# Custom 1U chassis w/ zero drives and up to 8 drives




