DPDK and energy efficiency
Etienne-Victor Depasquale
edepa at ieee.org
Tue Feb 23 22:43:02 UTC 2021
>
> This comes from OVS code and shows OVS thread spinning, not DPDK PMD.
> Blame the OVS application for not using e.g. _mm_pause() and burning
> the CPU like crazy.
>
OK, I'm citing a bit more from the same reference:
"By tracing back to the function's caller
in the PMD thread main(void *f_),
we found that the thread kept spinning on the following code block:
for (;;) {
    for (i = 0; i < poll_cnt; i++) {
        dp_netdev_process_rxq_port(pmd, list[i].port, poll_list[i].rx);
    }
}
This indicates that the [PMD] thread was continuously
monitoring and executing the receiving data path."
Cheers,
Etienne
On Tue, Feb 23, 2021 at 10:33 PM Pawel Malachowski <
pawmal-nanog at freebsd.lublin.pl> wrote:
> > > No, it is not PMD that runs the processor in a polling loop.
> > > It is the application itself, that may or may not busy-loop,
> > > depending on the application programmer's choice.
> >
> > From one of my earlier references [2]:
> >
> > "we found that a poll mode driver (PMD)
> > thread accounted for approximately 99.7 percent
> > CPU occupancy (a full core utilization)."
> >
> > And further on:
> >
> > "we found that the thread kept spinning on the following code block:
> >
> > for (;;) {
> >     for (i = 0; i < poll_cnt; i++) {
> >         dp_netdev_process_rxq_port(pmd, list[i].port, poll_list[i].rx);
> >     }
> > }
> > This indicates that the thread was continuously
> > monitoring and executing the receiving data path."
>
> This comes from OVS code and shows OVS thread spinning, not DPDK PMD.
> Blame the OVS application for not using e.g. _mm_pause() and burning
> the CPU like crazy.
>
>
> For comparison, take a look at top+i7z output from DPDK-based 100G DDoS
> scrubber currently lifting some low traffic using cores 1-13 on 16 core
> host. It uses naive DPDK::rte_pause() throttling to enter C1.
>
> Tasks: 342 total, 1 running, 195 sleeping, 0 stopped, 0 zombie
> %Cpu(s): 6.6 us, 0.6 sy, 0.0 ni, 89.7 id, 3.1 wa, 0.0 hi, 0.0 si, 0.0 st
>
> Core [core-id]   :Actual Freq (Mult.)   C0%    Halt(C1)%   C3%    C6%    Temp   VCore
> Core  1 [0]:      1467.73 (14.68x)      2.15   5.35        1      92.3   43     0.6724
> Core  2 [1]:      1201.09 (12.01x)      11.7   93.9        0      0      39     0.6575
> Core  3 [2]:      1200.06 (12.00x)      11.8   93.8        0      0      42     0.6543
> Core  4 [3]:      1200.14 (12.00x)      11.8   93.8        0      0      41     0.6549
> Core  5 [4]:      1200.10 (12.00x)      11.8   93.8        0      0      41     0.6526
> Core  6 [5]:      1200.12 (12.00x)      11.8   93.8        0      0      40     0.6559
> Core  7 [6]:      1201.01 (12.01x)      11.8   93.8        0      0      41     0.6559
> Core  8 [7]:      1201.02 (12.01x)      11.8   93.8        0      0      43     0.6525
> Core  9 [8]:      1201.00 (12.01x)      11.8   93.8        0      0      41     0.6857
> Core 10 [9]:      1201.04 (12.01x)      11.8   93.8        0      0      40     0.6541
> Core 11 [10]:     1201.95 (12.02x)      13.6   92.9        0      0      40     0.6558
> Core 12 [11]:     1201.02 (12.01x)      11.8   93.8        0      0      42     0.6526
> Core 13 [12]:     1204.97 (12.05x)      17.6   90.8        0      0      45     0.6814
> Core 14 [13]:     1248.39 (12.48x)      28.2   84.7        0      0      41     0.6855
> Core 15 [14]:     2790.74 (27.91x)      91.9   0           1      1      41     0.8885  <-- not PMD
> Core 16 [15]:     1262.29 (12.62x)      13.1   34.9        1.7    56.2   43     0.6616
>
> $ dataplanectl stats fcore | grep total
> fcore total idle 393788223887 work 860443658 (0.2%) (forced-idle 7458486526622)
>   recv 202201388561 drop 61259353721 (30.3%) limit 269909758 (0.1%)
>   pass 140606076622 (69.6%) ingress 66048460 (0.0%/0.0%)
>   sent 162580376914 (80.4%/100.0%) overflow 0 (0.0%) sampled 628488188/628488188
>
>
> --
> Pawel Malachowski
> @pawmal80
>
--
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale
More information about the NANOG mailing list