[dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
Jerin Jacob
jerin.jacob at caviumnetworks.com
Tue Nov 29 03:01:48 CET 2016
On Mon, Nov 28, 2016 at 03:53:08PM +0000, Eads, Gage wrote:
> (Bruce's advice heeded :))
>
> > > > >
> > > > > How would this check work? Wouldn't it prevent any core from
> > > > running the software scheduler in the centralized case?
> > > >
> > > > I guess you may not need RTE_EVENT_DEV_CAP here; instead, a flag
> > > > for device configure would work:
> > > >
> > > > #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
> > > >
> > > > struct rte_event_dev_config config;
> > > > config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
> > > > rte_event_dev_configure(.., &config);
> > > >
> > > > on the driver side on configure,
> > > > if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
> > > > eventdev->schedule = NULL;
> > > > else // centralized case
> > > > eventdev->schedule = your_centralized_schedule_function;
> > > >
> > > > Does that work?
> > >
> > > Hm, I fear the API would give users the impression that they can select the
> > scheduling behavior of a given eventdev, when a software scheduler is more
> > likely to be either distributed or centralized -- not both.
> >
> > Even if it is a capability flag, it is still per-device, right?
> > A capability flag is also read-only. Am I missing something here?
> >
>
> Correct, the capability flag I'm envisioning is per-device and read-only.
>
> > >
> > > What if we use the capability flag, and define rte_event_schedule() as the
> > scheduling function for centralized schedulers and rte_event_dequeue() as the
> > scheduling function for distributed schedulers? That way, the datapath could be
> > the simple dequeue -> process -> enqueue. Applications would check the
> > capability flag at configuration time to decide whether or not to launch an
> > lcore that calls rte_event_schedule().
> >
> > I am all for simple "dequeue -> process -> enqueue".
> > rte_event_schedule() was added for the SW scheduler only; does it make
> > sense to add one more check on top of rte_event_schedule() to see whether
> > it is really needed in the fastpath?
> >
>
> Yes, the additional check shouldn't be needed. In terms of the 'typical workflow' description, this is what I have in mind:
>
> *
> * An event-driven application has the following typical workflow on the fastpath:
> * \code{.c}
> * while (1) {
> *
> * rte_event_dequeue(...);
> *
> * (event processing)
> *
> * rte_event_enqueue(...);
> * }
> * \endcode
> *
> * The point at which events are scheduled to ports depends on the device. For
> * hardware devices, scheduling occurs asynchronously. Software schedulers can
> * either be distributed (each worker thread schedules events to its own port)
> * or centralized (a dedicated thread schedules to all ports). Distributed
> * software schedulers perform the scheduling in rte_event_dequeue(), whereas
> * centralized scheduler logic is located in rte_event_schedule(). The
> * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether a
> * device is centralized and thus needs a dedicated scheduling thread that
> * repeatedly calls rte_event_schedule().
Makes sense. I will change the existing schedule description to the
proposed one and add RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
in v2.
Thanks Gage.
> *
> */