[dpdk-dev] [RFC PATCH 0/7] RFC:EventDev OPDL PMD

Jerin Jacob jerin.jacob at caviumnetworks.com
Thu Nov 30 18:41:59 CET 2017


-----Original Message-----
> Date: Wed, 29 Nov 2017 17:15:12 +0000
> From: "Ma, Liang" <liang.j.ma at intel.com>
> To: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> CC: "dev at dpdk.org" <dev at dpdk.org>, "Van Haaren, Harry"
>  <harry.van.haaren at intel.com>, "Richardson, Bruce"
>  <bruce.richardson at intel.com>, "Jain, Deepak K" <deepak.k.jain at intel.com>,
>  "Mccarthy, Peter" <peter.mccarthy at intel.com>
> Subject: Re: [RFC PATCH 0/7] RFC:EventDev OPDL PMD
> User-Agent: Mutt/1.9.1 (2017-09-22)
> 
> On 29 Nov 04:56, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Wed, 29 Nov 2017 12:19:54 +0000
> > > From: "Ma, Liang" <liang.j.ma at intel.com>
> > > To: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> > > CC: dev at dpdk.org, "Van Haaren, Harry" <harry.van.haaren at intel.com>,
> > >  "Richardson, Bruce" <bruce.richardson at intel.com>, "Jain, Deepak K"
> > >  <deepak.k.jain at intel.com>, "Mccarthy, Peter" <peter.mccarthy at intel.com>
> > > Subject: Re: [RFC PATCH 0/7] RFC:EventDev OPDL PMD
> > > User-Agent: Mutt/1.5.20 (2009-06-14)
> > > 
> > > Hi Jerin,
> > >    Many thanks for your comments. Please see my comments below.
> > > 
> > > On 25 Nov 02:25, Jerin Jacob wrote:
> > > > -----Original Message-----
> > > > > Date: Fri, 24 Nov 2017 11:23:45 +0000
> > > > > From: liang.j.ma at intel.com
> > > > > To: jerin.jacob at caviumnetworks.com
> > > > > CC: dev at dpdk.org, harry.van.haaren at intel.com, bruce.richardson at intel.com,
> > > > >  deepak.k.jain at intel.com, john.geary at intel.com
> > > > > Subject: [RFC PATCH 0/7] RFC:EventDev OPDL PMD
> > > > > X-Mailer: git-send-email 2.7.5
> > > > > 
> > > > > From: Liang Ma <liang.j.ma at intel.com>
> > > > 
> > > > 
> > > > 
> > > > # How does the application know this PMD has the above limitations?
> > > > 
> > > > I think we need to add more RTE_EVENT_DEV_CAP_* capabilities
> > > > to depict these constraints. On the same note, I believe this
> > > > PMD is "radically" different from the other SW/HW PMDs, so
> > > > we cannot write a portable application using this PMD. In that case there
> > > > is no point in abstracting it as an eventdev PMD. Could you please
> > > > work out the new capabilities required to enable this PMD?
> > > > If it needs too many capability flags to express its capabilities,
> > > > we might want a different library for this, as it defeats the
> > > > purpose of portable eventdev applications.
> > > >
> > > Agreed on improving the capability information by adding more detail via 
> > > RTE_EVENT_DEV_CAP_*. While the OPDL is designed around a different 
> > 
> > Please submit the patches for the new capabilities this PMD requires to
> > depict its constraints. That is the only way an application can know 
> > the constraints of a given PMD.
> > 
> I will work on the capability issue and submit V2 patches when they are ready.

OK

> > > load-balancing architecture, one of load balancing across pipeline 
> > > stages where a consumer works on only a single stage, this does not 
> > > necessarily mean that it is completely incompatible with other eventdev 
> > > implementations. Although it is true that an application written to use 
> > > one of the existing eventdevs probably won't work nicely with the OPDL
> > > eventdev, the converse should work fine. That is, an application
> > > written as a pipeline using the OPDL eventdev for load balancing should 
> > > work without changes with the generic SW implementation, and there should 
> > > be no reason why it would not also work with other HW implementations 
> > > in DPDK. 
> > > The OPDL PMD implements a subset of the eventdev API functionality. I demonstrated 
> > > OPDL at this year's PRC DPDK summit and got some early feedback from potential
> > > users. Most of them would like to use it under the existing API (i.e. eventdev) 
> > > rather than another new API/library. That lets potential users more easily swap to 
> > > an existing SW/HW eventdev PMD.
> > 
> > Perfect. Let's have one application then, so it will make it easy to swap
> > SW/HW eventdev PMDs.
> > 
> > > 
> > > > # We should not add yet another PMD-specific example application
> > > > in the examples area, like "examples/eventdev_pipeline_opdl_pmd". We are
> > > > working on making examples/eventdev/pipeline_sw_pmd work
> > > > on both HW and SW.
> > > > 
> > > We agree here that we don't need a proliferation of example applications.
> > > However, this is a different architecture (not a dynamic packet scheduler but rather
> > > a static pipeline work distributor), and as such perhaps we should have a 
> > > sample app that demonstrates each contrasting architecture.
> > 
> > I agree. We need a sample application. Why not change the existing
> > examples/eventdev/pipeline_sw_pmd to make it work, since we are addressing
> > pipelining here? Let's write the application based on THE USE CASE, not
> > specific to a PMD. PMD-specific applications won't scale.
> > 
> I prefer to hold the OPDL example code out of this patch set. 
> It's better to upstream/merge the example code in another track.

OK. Example code can be added later once examples/eventdev/pipeline_sw_pmd is cleaned up.
The static pipeline (aka OPDL PMD) use case can be a separate file inside
examples/eventdev_pipeline.

> > > 
> > > > # We should not add new PMD-specific test cases in the
> > > > test/test/test_eventdev_opdl.c area. I think the existing PMD-specific
> > > > test cases can be moved to the respective driver area, where they can do 
> > > > the self-test by passing some command-line arguments to the vdev.
> > > > 
> > > We simply followed the existing test structure here. Would it be confusing to 
> > > have another variant of example test code? Is this done anywhere else? 
> > > Also, is there a chance that DTS would miss running the tests, or dislike 
> > > having to run them using a different method? However, we would defer to the consensus here.
> > > Could you elaborate on your concerns with having another test file in the test area?
> > 
> > PMD-specific test cases won't scale. They defeat the purpose of the common
> > framework. Cryptodev fell into that trap earlier, then fixed it.
> > For the DTS case, I think it can still be verified through vdev command-line
> > arguments to the new PMD. What do you think?
> > 
> Agreed. I would like to integrate the test code with the PMD, but is any API available 
> for self-test purposes? I didn't find an existing API supporting self-test. Any hints?

You may not need any special API for that.
I was thinking of invoking the self-test with a vdev argument, something like
--vdev=event_<your_driver>,selftest=1. At the end of driver probe, you can invoke
the driver-specific test cases if selftest == 1.


> 
> > > 
> > > > # Do you have relative performance numbers vs. the existing SW PMD?
> > > > Meaning, how much does it improve any specific use case WRT the existing
> > > > SW PMD? That should be a metric to define the need for a new PMD.
> > > > 
> > > Yes, we definitely have the numbers. Given the limitations (ref. cover letter), OPDL 
> > > can achieve a 3X-5X schedule rate (on a Xeon 2699 v4 platform) compared with the 
> > > standard SW PMD, and needs no scheduling core. This is the core value of the OPDL PMD.
> > > For certain use cases, "static pipeline" and "strong ordering", OPDL is very 
> > > useful, efficient, and generic across processor architectures.
> > 
> > Sounds good.
> > 
> > > 
> > > > # There could be another SW driver from another vendor like ARM.
> > > > So, I think, it is important to define the need for another SW
> > > > PMD and how many limitations/new capabilities it needs to define to
> > > > fit into the eventdev framework.
> > > >
> > > To summarize: OPDL is designed for certain use cases, where performance increases 
> > > dramatically. Also, OPDL can fall back to the standard SW PMD seamlessly.
> > > That definitely fits into the eventdev API.
> > > 

