DPDK and energy efficiency

Eric Kuhnke eric.kuhnke at gmail.com
Fri Mar 5 21:40:34 UTC 2021


For comparison purposes, I'm curious about the difference in wattage
results between:

a) Your R640 at 420W running DPDK

b) The same R640 hardware temporarily booted from an Ubuntu Server live USB,
running common CPU stress and memory/disk I/O benchmarks to deliberately load
the system to 100% and characterize its absolute maximum AC load wattage
(roughly as sketched below).

https://packages.debian.org/search?keywords=stress

https://packages.debian.org/search?keywords=stress-ng

What's the delta between the 420W figure and the absolute maximum load the
server is capable of pulling on the 208VAC side?

https://manpages.ubuntu.com/manpages/artful/man1/stress-ng.1.html
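Something along these lines would do it -- a rough sketch only; the worker
counts, iDRAC address and credentials below are placeholders to adjust for
your own box:

  # peg every CPU, plus some memory and disk I/O workers, for 10 minutes
  sudo stress-ng --cpu 0 --vm 4 --vm-bytes 75% --hdd 2 --timeout 600s --metrics-brief

  # meanwhile, read the chassis power draw from the iDRAC over IPMI/DCMI
  ipmitool -I lanplus -H <idrac-ip> -U <user> -P <password> dcmi power reading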


One possible factor is whether ESXi is configured to pass the PCIe devices
directly through to the guest VM, or whether there is any abstraction in
between. Outside of ESXi, in the world of Xen or KVM there are many different
ways a guest domU can access a dom0's network devices, some of which have an
impact on the overall steady-state wattage consumed by the system.
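For instance, on KVM/libvirt a NIC can be handed to the guest wholesale via
VFIO passthrough rather than going through a bridge/virtio path. A minimal
hostdev stanza in the guest's domain XML (virsh edit) looks roughly like the
sketch below; the PCI address 0000:3b:00.0 is just an example, substitute
whatever lspci shows for your NIC:

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/>
    </source>
  </hostdev>

Whether the guest busy-polls a passed-through device or a paravirtualized
vNIC can change how much host CPU time is burned on the datapath, which shows
up in steady-state wattage.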

If the greatest possible efficiency is desired for a number of 1U boxes, one
thing to look at would be something similar to the Open Compute Project
approach: a single centralized AC-to-DC power shelf, and servers that don't
each have their own discrete 110-240VAC single or dual power supplies. In
terms of cubic meters of air moved per hour vs wattage, the fans found in 1U
servers are really quite inefficient. As a randomly chosen example of a 12VDC
40mm (1U server height) fan:

https://www.shoppui.com/documents/9HV0412P3K001.pdf

If you have a single 12.0VDC fan with a maximum load of 1.52A, that's a
possible load of up to 18.24W for just *one* 40mm-height fan. And your typical
high-speed dual-socket 1U server may have eight or ten of those in the usual
front-to-back wind-tunnel configuration. Normally the fans won't be running at
full speed, so each one won't be an 18W load; more like 10-12W per fan is
totally normal. Plus at least two more fans in the hot-swap power supplies.
Under heavy load I would not be surprised at all if 80W to 90W of your R640's
total 420W load is ventilation.
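Back of the envelope, with assumed (not measured) numbers -- eight chassis
fans at roughly 10W each plus one fan in each of the two PSUs at roughly 5W:

  8 x 10W + 2 x 5W = 90W

which is the ballpark behind that 80-90W guess.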

In a situation where you're running out of power before you run out of rack
space, look at some 1.5U and 2U-high chassis that use 60mm-height fans, which
are much more efficient in terms of air moved per unit time vs watts.





On Fri, Mar 5, 2021 at 12:44 PM Brian Knight via NANOG <nanog at nanog.org>
wrote:

> On 2021-03-05 12:22, Etienne-Victor Depasquale wrote:
>
> > Sure, here goes:
> >
> > https://www.surveymonkey.com/results/SM-BJ9FCT6K9/
>
> Thanks for sharing these results.  We run DPDK workloads (Cisco nee
> Viptela vEdge Cloud) on ESXI.  Fwiw, a quick survey of a few of our Dell
> R640s running mostly vEdge workloads shows the PS output wattage is
> about 60% higher than a non-vEdge workload: 420W vs 260W.  PS input
> amperage is 2.0A at 208V vs 1.4A, a 42% difference.  Processor type is Xeon
> 6152.  Stats obtained from the iDRAC lights-out management module.
>
> vEdge does not do any limiting of polling by default, and afaik the
> software has no support for any kind of limiting.  It will poll the
> network driver on every core assigned to the VM for max performance,
> except for one core which is assigned to the control plane.
>
> I'm usually more concerned about the lack of available CPU cores.  The
> CPU usage forces us not to oversubscribe the VM hosts, which means we
> must provision vEdges less densely and buy more gear sooner.  Plus, the
> increased power demand means we can fit about 12 vEdge servers per
> cabinet instead of 17.  (Power service is 30A 208V, maximum of 80%
> usage.)
>
> OTOH, I face far fewer questions about vEdge Cloud performance problems
> than I do on other virtual platforms.
>
>
> > Cheers,
> >
> > Etienne
>
>
> Thanks again,
>
> -Brian
>