[dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API

Jerin Jacob jerin.jacob at caviumnetworks.com
Tue Sep 12 09:17:37 CEST 2017


-----Original Message-----
> Date: Tue, 12 Sep 2017 06:35:16 +0000
> From: Shahaf Shuler <shahafs at mellanox.com>
> To: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> CC: "Ananyev, Konstantin" <konstantin.ananyev at intel.com>, Stephen Hemminger
>  <stephen at networkplumber.org>, Thomas Monjalon <thomas at monjalon.net>,
>  "dev at dpdk.org" <dev at dpdk.org>, "Zhang, Helin" <helin.zhang at intel.com>,
>  "Wu, Jingjing" <jingjing.wu at intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads
>  API
> 
> Tuesday, September 12, 2017 8:52 AM, Jerin Jacob:
> > > I understand the use case, and the fact those flags improve the
> > > performance on low-end ARM CPUs.
> > > IMO those flags cannot be on queue/port level. They must be global.
> > 
> > Where should we have it as global(in terms of API)?
> > And why it can not be at port level?
> 
> Because I don't think there is a use-case where an application would want to have refcounting on one port and not on the other. Either the application clones mbufs or it does not.
> Same about the multiple mempools: either the application has them or it does not.

Why not? If one port is given to the data plane and another port to the control
plane, they can have different characteristics.

By making it port level, we can achieve the global use case as well, but not
the other way around.

The MULTISEG flag also has the same attribute, but for some reason you are OK with
including that in the flags.

> 
> If there is a strong use-case for an application to say that on port X it clones mbufs and on port Y it doesn't, then maybe that is enough to have it per-port.
> We can go even further - why not have the guarantee per queue? It is possible if the application is willing to manage it.
> 
> Again, those are not offloads; therefore, if we expose them, it should be in a different location than the offloads field in eth_conf.

What is the definition of an offload? It is something we can offload to HW.
If so, then reference counting is something we can offload to HW with an external HW pool
manager, which DPDK supports now.
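
For what it is worth, DPDK already lets an application back its mbuf pool with
an external HW pool manager. A minimal sketch (the ops name "octeontx_fpavf"
and the sizes are assumptions, not taken from this patch):

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF   8192
#define MBUF_SIZE (RTE_MBUF_DEFAULT_BUF_SIZE + sizeof(struct rte_mbuf))

static struct rte_mempool *
create_hw_backed_pktmbuf_pool(int socket_id)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("mbuf_pool_hw", NB_MBUF, MBUF_SIZE,
				      256 /* cache size */,
				      sizeof(struct rte_pktmbuf_pool_private),
				      socket_id, 0);
	if (mp == NULL)
		return NULL;

	/* Back the pool with the HW mempool manager instead of the SW ring. */
	if (rte_mempool_set_ops_byname(mp, "octeontx_fpavf", NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	rte_pktmbuf_pool_init(mp, NULL);

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
	return mp;
}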

> 
> > 
> > >
> > > Even though the use-case is generic, the nicvf PMD is the only one which does
> > > such an optimization.
> > > So I am suggesting again - why not expose it as a PMD-specific parameter?
> > 
> > Why make it PMD specific, if the application can express it through
> > normative DPDK APIs?
> > 
> > >
> > > - The application can express that it wants such an optimization.
> > > - It is global
> > >
> > > Currently it does not seem there is high demand for such flags from other
> > > PMDs. If such demand arises, we can discuss again how to expose it
> > > properly.
> > 
> > It is not PMD specific. It is all about where it runs: it will be applicable for any
> > PMD that runs on low-end hardware where it needs SW-based Tx buffer
> > recycling (the NPU is a different story, as it has a HW-assisted mempool
> > manager).
> 
> Maybe, but I don't see other PMDs which use those flags. Are you aware of any plans to add such optimizations?

Sorry, I can't comment on another vendor's PMD roadmap.
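
To give a concrete picture of the SW-based Tx buffer recycling mentioned
above, here is a simplified sketch (illustrative only, not the nicvf code) of
a PMD Tx completion path when the application guarantees mbufs come from one
mempool and are never refcounted:

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustrative only: return 'n' transmitted mbufs from the Tx ring. */
static inline void
txq_free_done_mbufs(struct rte_mbuf **done, unsigned int n,
		    int no_refcount, int single_mempool)
{
	unsigned int i;

	if (no_refcount && single_mempool && n > 0) {
		/* Fast path: every mbuf has refcnt == 1 and belongs to the
		 * same pool, so the whole batch can be bulk-returned without
		 * touching per-mbuf reference counters or sorting by pool. */
		rte_mempool_put_bulk(done[0]->pool, (void **)done, n);
		return;
	}

	/* General path: honour reference counts and per-mbuf pools. */
	for (i = 0; i < n; i++)
		rte_pktmbuf_free(done[i]);
}

On a low-end core, skipping the per-mbuf refcnt atomics and the per-pool
sorting is where the cycles are expected to be saved.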

> You are pushing for a generic API which is currently used only by a single entity.

You are removing an existing generic flag.

> 
> > What are we losing by running DPDK effectively on low-end hardware with
> > such "on demand" runtime configuration through the normative DPDK API?
> 
> Complexity of APIs for applications. More structs on ethdev, more API definitions, more fields to be configured by the application, all valid for a single PMD.
> For the rest of the PMDs, those fields are currently don't-care.

I don't understand the application complexity part. It is just configuration at
port level, and it is at the application's will; it can choose to run in any mode.
BTW, it all boils down to features and performance/watt.
IMO, everything should be runtime configurable.


