[dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs

Eads, Gage gage.eads at intel.com
Tue Nov 29 06:46:08 CET 2016



>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
>  Sent: Monday, November 28, 2016 9:43 PM
>  To: Eads, Gage <gage.eads at intel.com>
>  Cc: dev at dpdk.org; Richardson, Bruce <bruce.richardson at intel.com>; Van
>  Haaren, Harry <harry.van.haaren at intel.com>; hemant.agrawal at nxp.com
>  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  
>  On Mon, Nov 28, 2016 at 03:53:08PM +0000, Eads, Gage wrote:
>  > (Bruce's advice heeded :))
>  >
>  > >  -----Original Message-----
>  > >  From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
>  > >  Sent: Tuesday, November 22, 2016 5:44 PM
>  > >  To: Eads, Gage <gage.eads at intel.com>
>  > >  Cc: dev at dpdk.org; Richardson, Bruce <bruce.richardson at intel.com>;
>  > > Van  Haaren, Harry <harry.van.haaren at intel.com>;
>  > > hemant.agrawal at nxp.com
>  > >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the
>  > > northbound APIs
>  > >
>  > >  On Tue, Nov 22, 2016 at 10:48:32PM +0000, Eads, Gage wrote:
>  > >  >
>  > >  >
>  > >  > >  -----Original Message-----
>  > >  > >  From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
>  > >  > >  Sent: Tuesday, November 22, 2016 2:00 PM
>  > >  > >  To: Eads, Gage <gage.eads at intel.com>
>  > >  > >  Cc: dev at dpdk.org; Richardson, Bruce <bruce.richardson at intel.com>;
>  > >  > >  Van Haaren, Harry <harry.van.haaren at intel.com>; hemant.agrawal at nxp.com
>  > >  > >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  > >  > >
>  > >  > >  On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
>  > >  > >  > >  > > One open issue I noticed is the "typical workflow"
>  > >  > >  > >  > > description starting in rte_eventdev.h:204 conflicts with
>  > >  > >  > >  > > the centralized software PMD that Harry posted last week.
>  > >  > >  > >  > > Specifically, that PMD expects a single core to call the
>  > >  > >  > >  > > schedule function. We could extend the documentation to
>  > >  > >  > >  > > account for this alternative style of scheduler invocation,
>  > >  > >  > >  > > or discuss ways to make the software PMD work with the
>  > >  > >  > >  > > documented workflow. I prefer the former, but either way I
>  > >  > >  > >  > > think we ought to expose the scheduler's expected usage to
>  > >  > >  > >  > > the user -- perhaps through an RTE_EVENT_DEV_CAP flag?
>  > >  > >  > >  > >  >
>  > >  > >  > >  > I prefer former too, you can propose the documentation
>  > >  > >  > >  > change required for software PMD.
>  > >  > >  > >  >
>  > >  > >  > Sure, proposal follows. The "typical workflow" isn't the most
>  > >  > >  > optimal by having a conditional in the fast-path, of course, but
>  > >  > >  > it demonstrates the idea simply.
>  > >  > >  > >  >
>  > >  > >  > >  > (line 204)
>  > >  > >  > >  >  * An event driven based application has following
>  > > typical  > > > > workflow on  > >  fastpath:
>  > >  > >  > >  >  * \code{.c}
>  > >  > >  > >  >  *      while (1) {
>  > >  > >  > >  >  *
>  > >  > >  > >  >  *              if (dev_info.event_dev_cap &
>  > >  > >  > >  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
>  > >  > >  > >  >  *                      rte_event_schedule(dev_id);
>  > >  > >  > >
>  > >  > >  > >  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
>  > >  > >  It can be input to application/subsystem to launch separate
>  > >  > >  core(s) for schedule functions.
>  > >  > >  But, I think, the "dev_info.event_dev_cap &
>  > >  > >  RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED" check can be moved inside the
>  > >  > >  implementation (to make better decisions and avoid consuming
>  > >  > >  cycles on HW based schedulers).
>  > >  > >  >
>  > >  > >  > How would this check work? Wouldn't it prevent any core from
>  > >  > running the software scheduler in the centralized case?
>  > >  > >
>  > >  > >  I guess you may not need RTE_EVENT_DEV_CAP here, instead need a
>  > >  > >  flag for device configure here
>  > >  > >
>  > >  > >  #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
>  > >  > >
>  > >  > >  struct rte_event_dev_config config;
>  > >  > >  config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
>  > >  > >  rte_event_dev_configure(.., &config);
>  > >  > >
>  > >  > >  on the driver side on configure,
>  > >  > >  if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
>  > >  > >  	eventdev->schedule = NULL;
>  > >  > >  else // centralized case
>  > >  > >  	eventdev->schedule = your_centralized_schedule_function;
>  > >  > >
>  > >  > >  Does that work?
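Just to make that idea concrete, here is a minimal sketch of what a driver's
configure hook could do with such a flag. This is purely illustrative:
RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED is only the flag proposed above, and the
struct layouts and function names below are stand-ins, not the actual eventdev
internals from the patch.

	#include <stdint.h>

	#define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)

	/* Stand-in types for illustration only. */
	struct rte_event_dev_config {
		uint64_t event_dev_cfg;     /* configuration flags */
	};

	struct rte_eventdev {
		void (*schedule)(void);     /* NULL if no dedicated scheduling thread is needed */
	};

	/* PMD-specific centralized scheduling routine (placeholder). */
	static void
	my_centralized_schedule(void)
	{
	}

	static int
	my_pmd_configure(struct rte_eventdev *dev,
			 const struct rte_event_dev_config *config)
	{
		if (config->event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
			dev->schedule = NULL;    /* workers schedule in rte_event_dequeue() */
		else
			dev->schedule = my_centralized_schedule;
		return 0;
	}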
>  > >  >
>  > >  > Hm, I fear the API would give users the impression that they can
>  > >  > select the scheduling behavior of a given eventdev, when a software
>  > >  > scheduler is more likely to be either distributed or centralized -- not both.
>  > >
>  > >  Even if it is a capability flag, then also it is per "device". Right?
>  > >  The capability flag is more of read-only too. Am I missing something here?
>  > >
>  >
>  > Correct, the capability flag I'm envisioning is per-device and read-only.
>  >
>  > >  >
>  > >  > What if we use the capability flag, and define rte_event_schedule()
>  > >  > as the scheduling function for centralized schedulers and
>  > >  > rte_event_dequeue() as the scheduling function for distributed
>  > >  > schedulers? That way, the datapath could be the simple
>  > >  > dequeue -> process -> enqueue. Applications would check the
>  > >  > capability flag at configuration time to decide whether or not to
>  > >  > launch an lcore that calls rte_event_schedule().
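As a concrete illustration of that application-side check, a sketch only --
the capability flag and rte_event_schedule() are the names proposed in this
thread, not necessarily the final API:

	#include <rte_eventdev.h>
	#include <rte_launch.h>

	static volatile int sched_run = 1;
	static uint8_t sched_dev_id;

	/* Dedicated scheduling loop for a centralized software scheduler. */
	static int
	sched_loop(void *arg)
	{
		uint8_t dev_id = *(uint8_t *)arg;

		while (sched_run)
			rte_event_schedule(dev_id);
		return 0;
	}

	static void
	launch_scheduler_if_needed(uint8_t dev_id, unsigned int lcore_id)
	{
		struct rte_event_dev_info info;

		rte_event_dev_info_get(dev_id, &info);

		/* Distributed devices schedule in rte_event_dequeue(); only a
		 * centralized device needs a dedicated scheduling lcore. */
		if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
			sched_dev_id = dev_id;
			rte_eal_remote_launch(sched_loop, &sched_dev_id, lcore_id);
		}
	}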
>  > >
>  > >  I am all for simple "dequeue -> process -> enqueue".
>  > >  rte_event_schedule() is added for the SW scheduler only, so it may not
>  > >  make sense to add one more check on top of "rte_event_schedule()" to
>  > >  see whether it is really needed or not in the fastpath?
>  > >
>  >
>  > Yes, the additional check shouldn't be needed. In terms of the 'typical
>  > workflow' description, this is what I have in mind:
>  >
>  > *
>  >  * An event driven based application has following typical workflow on
>  >  * fastpath:
>  >  * \code{.c}
>  >  *  while (1) {
>  >  *
>  >  *      rte_event_dequeue(...);
>  >  *
>  >  *      (event processing)
>  >  *
>  >  *      rte_event_enqueue(...);
>  >  *  }
>  >  * \endcode
>  >  *
>  >  * The events are injected to the event device through the *enqueue*
>  >  * operation by event producers in the system. The typical event producers
>  >  * are the ethdev subsystem for generating packet events, a core (SW) for
>  >  * generating events based on different stages of application processing,
>  >  * cryptodev for generating crypto work completion notifications, etc.
>  >  *
>  >  * The *dequeue* operation gets one or more events from the event ports.
>  >  * The application processes the events and sends them to a downstream
>  >  * event queue through rte_event_enqueue() if it is an intermediate stage
>  >  * of event processing; on the final stage, the application may send to a
>  >  * different subsystem like ethdev to send the packet/event on the wire
>  >  * using the ethdev rte_eth_tx_burst() API.
>  >  *
>  >  * The point at which events are scheduled to ports depends on the device.
>  >  * For hardware devices, scheduling occurs asynchronously. Software
>  >  * schedulers can either be distributed (each worker thread schedules
>  >  * events to its own port) or centralized (a dedicated thread schedules to
>  >  * all ports). Distributed software schedulers perform the scheduling in
>  >  * rte_event_dequeue(), whereas centralized scheduler logic is located in
>  >  * rte_event_schedule(). The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability
>  >  * flag indicates whether a device is centralized and thus needs a
>  >  * dedicated scheduling thread that
>  
>  Since we are starting a dedicated thread in the centralized case, how about
>  naming the flag RTE_EVENT_DEV_CAP_CENTRALIZED_SCHED
>  instead of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED?
>  No strong opinion here. Just a thought.
>  

Fine with me.
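For what it's worth, with that rename the application-side check from the
sketch above would simply flip polarity, e.g. (still assuming the proposed
names, nothing final):

	/* Launch the dedicated scheduling lcore only when the device
	 * advertises centralized scheduling. */
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_CENTRALIZED_SCHED) {
		sched_dev_id = dev_id;
		rte_eal_remote_launch(sched_loop, &sched_dev_id, lcore_id);
	}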

