[dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API

Shahaf Shuler shahafs at mellanox.com
Tue Sep 12 08:35:16 CEST 2017


Tuesday, September 12, 2017 8:52 AM, Jerin Jacob:
> > I understand the use case, and the fact that those flags improve the
> > performance on low-end ARM CPUs.
> > IMO those flags cannot be at queue/port level. They must be global.
> 
> Where should we have it as global (in terms of API)?
> And why can it not be at port level?

Because I don't think there is a use case where an application would want refcounting on one port and not on the other. Either the application clones mbufs or it doesn't.
The same goes for multiple mempools: either the application uses them or it doesn't.

If there is a strong use case for an application to say that on port X it clones mbufs and on port Y it doesn't, then maybe that is enough to make it per-port.
We can go even further - why not have the guarantee per queue? That is possible if the application is willing to manage it.

Again, those are not offloads; therefore, if we expose them, it should be in a different location than the offloads field in the eth conf.

> 
> >
> > Even though the use case is generic, the nicvf PMD is the only one which
> > does such an optimization.
> > So I am suggesting again - why not expose it as a PMD-specific parameter?
> 
> Why make it PMD-specific if the application can express it through
> normative DPDK APIs?
> 
> >
> > - The application can express that it wants such an optimization.
> > - It is global.
> >
> > Currently it does not seem there is high demand for such flags from other
> > PMDs. If such demand arises, we can discuss again how to expose it
> > properly.
> 
> It is not PMD-specific. It is all about where it runs: it will be applicable
> for any PMD that runs on low-end hardware where it needs SW-based Tx buffer
> recycling (the NPU is a different story, as it has a HW-assisted mempool
> manager).

Maybe, but I don't see any other PMD which uses those flags. Are you aware of any plans to add such optimizations?
You are pushing for a generic API which is currently used only by a single entity.

> What are we losing by running DPDK effectively on low-end hardware with
> such "on demand" runtime configuration through a DPDK normative API?

Complexity of APIs for applications: more structs in ethdev, more API definitions, more fields to be configured by the application, all valid for a single PMD.
For the rest of the PMDs, those fields are currently don't-care.

