[dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API

Shahaf Shuler shahafs at mellanox.com
Tue Sep 12 08:59:07 CEST 2017


September 12, 2017 9:43 AM, Andrew Rybchenko:

I think port level is the right place for these flags. These flags define which
transmit and transmit cleanup callbacks could be used. These functions are
specified at the port level now, and I see no good reason to change that.
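
For reference, the datapath callbacks do indeed hang off the device rather than
the queue - an approximate excerpt from struct rte_eth_dev in rte_ethdev.h
(field names recalled from memory, so treat this as a sketch); a single
tx_pkt_burst pointer serves every Tx queue of the port:

    struct rte_eth_dev {
            eth_rx_burst_t rx_pkt_burst;   /* PMD receive function */
            eth_tx_burst_t tx_pkt_burst;   /* PMD transmit function */
            eth_tx_prep_t  tx_pkt_prepare; /* PMD transmit prepare function */
            /* ... further per-port state ... */
    };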

The Tx queue flags are currently not per-port but rather per-queue; the flags are provided as an input to tx_queue_setup.
Even though the applications and examples in the DPDK tree use identical flags for all queues, that does not mean an application is not allowed to do otherwise.
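
To illustrate the point, nothing in the current API stops an application from
configuring its queues differently - a minimal sketch using the legacy
txq_flags (port_id, nb_txd and socket_id are assumed to be set up elsewhere,
error handling omitted):

    #include <rte_ethdev.h>

    struct rte_eth_dev_info dev_info;
    struct rte_eth_txconf txq_conf;

    rte_eth_dev_info_get(port_id, &dev_info);
    txq_conf = dev_info.default_txconf;

    /* Queue 0: the application guarantees single-segment mbufs taken from
     * a single mempool, with the reference count left untouched. */
    txq_conf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
                         ETH_TXQ_FLAGS_NOMULTMEMP |
                         ETH_TXQ_FLAGS_NOREFCOUNT;
    rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txq_conf);

    /* Queue 1: no such guarantees, keep the driver defaults. */
    rte_eth_tx_queue_setup(port_id, 1, nb_txd, socket_id,
                           &dev_info.default_txconf);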


It will complicate the possibility of making the transmit and transmit cleanup callbacks
per queue (not per port as now).
All three (no-multi-seg, no-multi-mempool, no-reference-counter) are from
one group and should go together.
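
They do form one group from the PMD's point of view: the simplified completion
path is only safe when all three conditions hold, so a driver would typically
gate on the combination, roughly like this (the PMD-internal names below are
hypothetical; the two free routines are sketched further down):

    /* Inside a hypothetical PMD tx_queue_setup(): pick the Tx completion
     * routine for this queue from the guarantees the application gave. */
    const uint32_t fast_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
                                ETH_TXQ_FLAGS_NOMULTMEMP |
                                ETH_TXQ_FLAGS_NOREFCOUNT;

    if ((tx_conf->txq_flags & fast_flags) == fast_flags)
            txq->free_bufs = tx_free_bulk_simple; /* bulk put to one mempool */
    else
            txq->free_bufs = tx_free_generic;     /* per-mbuf rte_pktmbuf_free() */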


Even though the use case is generic, the nicvf PMD is the only one which does such an optimization.

So I am suggesting again - why not expose it as a PMD-specific parameter?
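
By "PMD-specific parameter" I mean the usual devargs mechanism, e.g. a
hypothetical knob (the parameter name below is made up purely for illustration)
passed on the EAL command line:

    testpmd -w 0002:01:00.1,tx_free_bulk=1 -- -i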



Why make it PMD specific, if the application can express it through
normative DPDK APIs?

- The application can express that it wants such an optimization.
- It is global.



Currently it does not seem there is high demand for such flags from other PMDs. If such demand arises, we can discuss again how to expose them properly.



It is not PMD specific. It is all about where it runs. It will be applicable
to any PMD that runs on low-end hardware where it needs SW-based Tx buffer
recycling (the NPU is a different story, as it has a HW-assisted mempool
manager).

What are we losing by running DPDK effectively on low-end hardware with such
"on demand" runtime configuration through the DPDK normative API?

+1, and it improves performance on amd64 as well - definitely less than 24%,
but noticeable. If the application architecture meets these conditions, why not
allow it to use the advantage and run faster?
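
For completeness, the SW-based Tx buffer recycling discussed above, under the
three assumptions (single segment, single mempool, reference count untouched),
essentially boils down to returning a whole burst of completed mbufs to one
mempool in a single bulk operation instead of freeing them one by one. A rough
sketch - the function names are illustrative, not taken from any existing PMD:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Fast completion path: every mbuf is single-segment, comes from the
     * same mempool and its reference count was never touched, so the whole
     * burst can be returned to the pool in one bulk call. */
    static inline void
    tx_free_bulk_simple(struct rte_mbuf **done, unsigned int n)
    {
            if (n == 0)
                    return;
            rte_mempool_put_bulk(done[0]->pool, (void **)done, n);
    }

    /* Generic completion path: handles multi-segment chains, shared mbufs
     * and multiple mempools, at the cost of per-mbuf work. */
    static inline void
    tx_free_generic(struct rte_mbuf **done, unsigned int n)
    {
            unsigned int i;

            for (i = 0; i < n; i++)
                    rte_pktmbuf_free(done[i]);
    }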

