[RFC] lib/ethdev: introduce table driven APIs

Zhang, Qi Z qi.z.zhang at intel.com
Mon Jun 19 02:22:59 CEST 2023



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk at gmail.com>
> Sent: Friday, June 16, 2023 9:20 AM
> To: Zhang, Qi Z <qi.z.zhang at intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu at intel.com>
> Cc: Ori Kam <orika at nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas at monjalon.net>; david.marchand at redhat.com; Richardson, Bruce
> <bruce.richardson at intel.com>; jerinj at marvell.com; ferruh.yigit at amd.com;
> Mcnamara, John <john.mcnamara at intel.com>; Zhang, Helin
> <helin.zhang at intel.com>; techboard at dpdk.org; dev at dpdk.org; Ivan Malov
> <ivan.malov at arknetworks.am>
> Subject: Re: [RFC] lib/ethdev: introduce table driven APIs
> 
> On Thu, Jun 15, 2023 at 7:36 PM Zhang, Qi Z <qi.z.zhang at intel.com> wrote:
> >
> 
> > > > If we assume that the application is not P4-aware, it will consume
> > > > existing rte_flow API for flow offloading. In this case, all we need
> > > > to do is implement it in the PMD, which will be a highly
> > > > hardware-specific task. Do you propose generalizing this common part?
> > > >
> > > > On the other hand, if the application is P4-aware, we can assume that
> > > > there won't be a need for translation between P4 tokens and rte_flow
> > > > protocols in the PMD.
> > >
> > > I agree, translation is bad. There are two elements to that:
> > > 1) If it is a P4-aware application, why bother with the DPDK abstraction?
> > > 2) Can we use compiler techniques to avoid the cost of translation if a
> > > P4-aware path is needed in DPDK, rather than creating yet another
> > > library? In this context, that would translate to making some of your
> > > compiler and FW work generic so that _any_ other rte_flow-based driver
> > > can use and improve it.
> >
> >
> > Ok, I would like to gain a better understanding. Below is my current
> > understanding:
> >
> > There are no plans to introduce any new API in DPDK. However, your
> > proposal suggests the creation of a tool, such as a compiler, which would
> > assist in generating a translation layer from P4 tables/actions to
> > rte_flow for user applications like a P4 Runtime backend based on DPDK.
> >
> > Could you provide more details about the design? Specifically, I would
> > like to know what the input for the compiler is, who is responsible for
> > generating that input, and the process involved.
> >
> > I apologize if I have not grasped the complete picture, but I would
> > appreciate your patience.
> 
> + @Cristian Dumitrescu
> 
> There is already a lot of P4 support in DPDK (just based on the DPDK
> lib/pipeline SW, without any HW acceleration). I am not sure how much it
> overlaps, nor how cleanly this would integrate with the existing SW versus
> "creating a new one".
> I would suggest enhancing the current P4-DPDK support by using an rte_flow
> backend. That would translate to:
> 1) Update https://github.com/p4lang/p4c/tree/main/backends/dpdk to
> translate generic P4 table key tokens into rte_flow tokens for spec file
> generation.

OK. I assume the compiler would need to understand the logic of the P4 parser and determine how each key field in a P4 table maps to an rte_flow header item, and that this process should be independent of any specific vendor.
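
For concreteness, here is a rough sketch in C of the kind of vendor-neutral mapping I would expect the compiler to emit for a P4 table key. This is purely illustrative: struct p4_key_map and the example table are hypothetical, not an existing p4c or DPDK interface; only the rte_flow item types and structs are real DPDK definitions.

/* Hypothetical sketch only: a compiler-generated, vendor-neutral mapping
 * from P4 match key fields to rte_flow items. */
#include <stddef.h>
#include <stdint.h>
#include <rte_flow.h>

struct p4_key_map {
	const char *p4_field;          /* field name from the P4 table key */
	enum rte_flow_item_type item;  /* rte_flow item carrying this field */
	size_t offset;                 /* byte offset inside the item spec */
	size_t width;                  /* field width in bytes */
};

/* e.g. derived from: table ipv4_lpm { key = { hdr.ipv4.dst_addr : lpm; } } */
static const struct p4_key_map ipv4_lpm_key[] = {
	{ "hdr.ipv4.dst_addr", RTE_FLOW_ITEM_TYPE_IPV4,
	  offsetof(struct rte_flow_item_ipv4, hdr.dst_addr), 4 },
};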

However, the question remains how to handle vendor-specific data, which can also be part of the table/action key and could potentially be mapped to either rte_flow_item_tag or rte_flow_item_meta. I'm uncertain how the P4-DPDK compiler can manage this aspect. Perhaps it should be addressed by each vendor's individual backend compiler, while we focus on defining the output specification and providing the common components for parser analysis.
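
As an illustration of how such a vendor-specific field might surface on the rte_flow side (a minimal sketch only; the choice of a TAG item and the tag index are my assumptions, and build_vendor_meta_item() is not an existing API):

/* Minimal sketch, assuming a vendor-specific P4 key field (e.g. user
 * metadata set earlier in the pipeline) is carried in an rte_flow TAG
 * item. The tag index here is arbitrary; how a backend compiler would
 * choose it is exactly the open question above. */
#include <stdint.h>
#include <rte_flow.h>

static void
build_vendor_meta_item(struct rte_flow_item *item,
		       struct rte_flow_item_tag *spec,
		       struct rte_flow_item_tag *mask,
		       uint32_t value)
{
	spec->index = 0;          /* hypothetical tag register */
	spec->data = value;       /* value produced by the P4 pipeline */
	mask->index = 0xff;
	mask->data = UINT32_MAX;

	item->type = RTE_FLOW_ITEM_TYPE_TAG;
	item->spec = spec;
	item->last = NULL;
	item->mask = mask;
}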

> 2) Update https://github.com/p4lang/p4-dpdk-target or introduce a common
> library in DPDK to map the compiler output (spec file) to rte_flow object
> invocations.

I'm not quite sure why we need to update the p4-dpdk-target project, since its purpose is to use DPDK to build a software pipeline from P4. However, if we do need to introduce a common library in DPDK, the following questions arise:

1. What will the API of the library look like? Will it still maintain a table-driven interface that is compatible with P4 Runtime? What are the key differences compared to the current proposal?
2. During runtime, will the library load the spec file (output of the compiler) and construct a mapping from P4 tables/actions to the rte_flow API? Is my understanding correct? (A rough sketch of what I have in mind follows after this list.)
3. Some hardware vendors already have backend P4 compilers that are capable of generating hints for configuring hardware based on tables/actions. Is it possible to incorporate a "pass-through" mode within this library?
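
To make question 2 more concrete, here is the rough call flow I have in mind. All p4rt_* names are hypothetical placeholders for whatever the common library would expose; only rte_flow_create() and its argument types are existing DPDK API.

/* Purely illustrative sketch of question 2: translating a P4 Runtime
 * "table add" request into an rte_flow_create() call via the mapping
 * built from the compiler-generated spec file. */
#include <stdint.h>
#include <rte_flow.h>

struct p4rt_table;   /* hypothetical: table object built from the spec file */
struct p4rt_entry;   /* hypothetical: match values + action args from P4 Runtime */

/* Hypothetical translator provided by the common library; expected to fill
 * and END-terminate the item/action arrays. */
int p4rt_entry_to_flow(const struct p4rt_table *tbl,
		       const struct p4rt_entry *entry,
		       struct rte_flow_attr *attr,
		       struct rte_flow_item items[],
		       struct rte_flow_action actions[]);

static struct rte_flow *
p4rt_table_add(uint16_t port_id, const struct p4rt_table *tbl,
	       const struct p4rt_entry *entry)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item items[8];
	struct rte_flow_action actions[4];
	struct rte_flow_error err;

	/* map P4 key fields/actions to rte_flow items/actions via the spec */
	if (p4rt_entry_to_flow(tbl, entry, &attr, items, actions) != 0)
		return NULL;

	return rte_flow_create(port_id, &attr, items, actions, &err);
}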

Thanks
Qi


