[dpdk-users] DPDK QOS scheduler priority starvation issue

Dumitrescu, Cristian cristian.dumitrescu at intel.com
Wed Apr 27 12:34:10 CEST 2016


Hi Ashok,

I am not sure I understand what the issue is, as you do not provide the measured output rates. You mention the pipe is configured with 400m (assuming 400 million credits), pipe TC1 with 40m and pipe TC3 with 400m, while the input traffic is 100m for pipe TC1 and 500m for pipe TC3; in that case the output (i.e. scheduled and TX-ed) traffic should be close to 40m (40 million bytes per second, including the Ethernet framing overhead of 20 bytes per frame) for pipe TC1 and close to 360m for pipe TC3.
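
As a sanity check, the 40m / 360m split falls out of a simple strict-priority model. Below is a minimal back-of-the-envelope sketch (plain C, not rte_sched code; it ignores framing overhead and just plugs in the rates quoted above):

/* Back-of-the-envelope model of strict priority within one pipe
 * (illustrative only, NOT rte_sched code). "Rates" are credit bytes/sec;
 * framing overhead is ignored for simplicity. */
#include <stdio.h>

int main(void)
{
	double pipe_rate   = 400e6;                  /* pipe rate: 400m       */
	double tc_rate[4]  = { 40e6, 0, 400e6, 0 };  /* TC1 = 40m, TC3 = 400m */
	double tc_input[4] = { 100e6, 0, 500e6, 0 }; /* offered load per TC   */
	double remaining   = pipe_rate;

	for (int tc = 0; tc < 4; tc++) {        /* serve TCs in priority order */
		double out = tc_input[tc];
		if (out > tc_rate[tc])
			out = tc_rate[tc];
		if (out > remaining)
			out = remaining;
		remaining -= out;
		printf("TC%d expected output: %.0f bytes/s\n", tc + 1, out);
	}
	return 0;
}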

Once a pipe is selected for scheduling, we only read the pipe and pipe TC credits once (at the moment the pipe is selected, which is also when the pipe and pipe TC credits are updated), so we do not re-evaluate the pipe credits again until the next time the pipe is selected.
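
To illustrate the point, here is a simplified sketch of that behaviour (this is not the actual rte_sched.c code; all names are made up for illustration):

/* Pipe and pipe TC credits are refilled once, when the pipe is selected,
 * and are only consumed until the pipe is selected again. */
#include <stdint.h>
#include <stdbool.h>

struct pipe_state {
	uint32_t pipe_credits;   /* token bucket credits for the whole pipe */
	uint32_t tc_credits[4];  /* per traffic class credits               */
};

/* Called once, at the moment the pipe is selected: refill the credits. */
static void
pipe_credits_update(struct pipe_state *p, const uint32_t tc_rate[4],
		    uint32_t pipe_refill)
{
	p->pipe_credits += pipe_refill;
	for (int tc = 0; tc < 4; tc++)
		p->tc_credits[tc] = tc_rate[tc];  /* reset per-TC allowance */
}

/* Called per packet while the pipe stays selected: consume only. */
static bool
pipe_credits_check(struct pipe_state *p, int tc, uint32_t pkt_len)
{
	if (pkt_len > p->pipe_credits || pkt_len > p->tc_credits[tc])
		return false;                     /* not enough credits      */
	p->pipe_credits   -= pkt_len;
	p->tc_credits[tc] -= pkt_len;
	return true;
}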

The hierarchical scheduler is only accurate when many (hundreds or thousands of) pipes are active. It looks like you are only using a single pipe for your test; please retry with more pipes active, e.g. along the lines of the sketch below.
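
/* Sketch: enable many pipes on subport 0 so the scheduler has enough
 * concurrently busy pipes to be accurate. rte_sched_pipe_config() is the
 * real API call; the pipe count and pipe profile id 0 are placeholders
 * for whatever your setup uses. */
#include <stdint.h>
#include <rte_sched.h>

static int
enable_pipes(struct rte_sched_port *port, uint32_t n_pipes)
{
	for (uint32_t pipe = 0; pipe < n_pipes; pipe++) {
		int ret = rte_sched_pipe_config(port, 0 /* subport */, pipe,
						0 /* pipe profile */);
		if (ret != 0)
			return ret;  /* propagate first configuration error */
	}
	return 0;
}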

Regards,
Cristian


Hello Cristian,

We are running into an issue with the DPDK scheduler: we notice that lower priority traffic (TC3) starves higher priority traffic if the packet size of the lower priority traffic is smaller than the packet size of the higher priority traffic.
If the packet size of the lower priority traffic (TC3) is the same as or larger than that of the higher priority traffic (TC1 or TC2), we don't see the problem.
Using q-index within the TC:

-Q0 (TC1), 1024 byte packets, 40m configured and 100m sent
-Q2 (TC3), 128/256 byte packets, 400m configured and 500m sent
-Only one pipe active (configured for 400m)
-Only one subport configured.
-TC period is set to 10 ms
In this scenario TC3 carries most of the traffic (400m).
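For reference, the setup is roughly along the lines of the sketch below (old 4-TC rte_sched API; the rates are the ones listed above, while the token bucket sizes, WRR weights and the TC index mapping TC1 -> tc_rate[0], TC3 -> tc_rate[2] are illustrative placeholders rather than our exact settings):

/* Illustrative pipe profile and subport parameters for the scenario above.
 * Rates are in credit bytes/sec. */
#include <rte_sched.h>

static struct rte_sched_pipe_params pipe_profiles[] = {
	{
		.tb_rate     = 400000000,   /* pipe configured for 400m */
		.tb_size     = 1000000,     /* placeholder bucket size  */
		.tc_rate     = { 40000000, 400000000, 400000000, 400000000 },
		.tc_period   = 10,          /* TC period: 10 ms         */
		.wrr_weights = { 1, 1, 1, 1,  1, 1, 1, 1,
				 1, 1, 1, 1,  1, 1, 1, 1 },
	},
};

static struct rte_sched_subport_params subport_params = {
	.tb_rate   = 400000000,
	.tb_size   = 1000000,
	.tc_rate   = { 400000000, 400000000, 400000000, 400000000 },
	.tc_period = 10,
};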
We are using an older version of DPDK; is this something addressed in later releases?
Appreciate any hint,
thanks
Ashok



