[dpdk-dev] [RFC] New packet type query API

Shahaf Shuler shahafs at mellanox.com
Wed Jan 17 15:34:06 CET 2018


Wednesday, January 17, 2018 10:09 AM, Andrew Rybchenko:
> On 01/16/2018 06:55 PM, Adrien Mazarguil wrote:
> > I understand the motivation behind this proposal, however since new
> > ideas must be challenged, I have a few comments:
> >
> > - How about making packet type recognition an optional offload
> >    configurable per queue like any other (e.g. DEV_RX_OFFLOAD_PTYPE)?
> >    That way the extra processing cost could be avoided for applications
> >    that do not care.
> >
> > - Depending on HW, packet type information inside RX descriptors may not
> >    necessarily fit 64-bit, or at least not without transformation. This
> >    transformation would still cause wasted cycles on the PMD side.
> >
> > - In case enable_ptype_direct is enabled, the PMD may not waste CPU
> >    cycles, but the subsequent look-up with the proposed API would
> >    translate to a higher cost on the application side. As a data plane
> >    API, how does this benefit applications that want to retrieve packet
> >    type information?
> >
> > - Without a dedicated mbuf flag, an application cannot tell whether
> >    enclosed packet type data is in HW format. Even if present, if port
> >    information is discarded or becomes invalid (e.g. mbuf stored in an
> >    application queue for lengthy periods or passed as is to an unrelated
> >    application), there is no way to make sense of the data.
> >
> > In my opinion, mbufs should only contain data fields in a standardized
> > format. Managing packet types like an offload which can be toggled at
> > will seems to be the best compromise. Thoughts?
> 
> +1

Yes.
PTYPE is yet another offload the PMD provides. It should be enabled/disabled in the same way all other offloads are.
Applications that are not interested in it and want the extra performance should simply not enable it.
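For illustration, a minimal sketch of how that toggle could look with the per-port offloads API introduced in 17.11. Note DEV_RX_OFFLOAD_PTYPE is the flag proposed in this thread, not an existing DPDK define:

    #include <rte_ethdev.h>

    /* Request packet type parsing only when the PMD reports support
     * for it; ports that leave the flag off keep the leaner Rx path.
     * DEV_RX_OFFLOAD_PTYPE is hypothetical, per the proposal above. */
    static int
    configure_port_with_ptype(uint16_t port_id, uint16_t nb_rxq,
                              uint16_t nb_txq)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf conf = { 0 };

            rte_eth_dev_info_get(port_id, &dev_info);

            if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_PTYPE)
                    conf.rxmode.offloads |= DEV_RX_OFFLOAD_PTYPE;

            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }

An application that never sets the flag would then behave like it does with any other Rx offload it leaves disabled, with no extra per-packet cost.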
