[dpdk-dev] [RFC] [PATCH v2] libeventdev: event driven programming model framework for DPDK

Bruce Richardson bruce.richardson at intel.com
Wed Nov 2 12:45:07 CET 2016


On Wed, Nov 02, 2016 at 04:17:04PM +0530, Jerin Jacob wrote:
> On Wed, Oct 26, 2016 at 12:11:03PM +0000, Van Haaren, Harry wrote:
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> > > 
> > > So far, I have received constructive feedback from Intel, NXP and Linaro folks.
> > > Let me know if anyone else is interested in contributing to the definition of eventdev.
> > > 
> > > If there are no major issues in the proposed spec, then Cavium would like to work on
> > > implementing and upstreaming the common code (lib/librte_eventdev/) and
> > > an associated HW driver. (Requested minor changes of v2 will be addressed
> > > in the next version.)
> >
> 
> Hi All,
> 
> Two queries,
> 
> 1) In the SW implementation, is there any connection between "struct
> rte_event_port_conf"'s dequeue_queue_depth and enqueue_queue_depth?
> i.e. it should be enqueue_queue_depth >= dequeue_queue_depth, right?
> I thought of adding the common checks in the common layer.

I think this is probably best left to the driver layers to enforce. For
us, such a restriction doesn't really make sense, though in many cases
that would be the usual setup. For accurate load balancing, the dequeue
queue depth would be small, and the burst size would probably equal the
queue depth, meaning the enqueue depth needs to be at least as big.
However, for better throughput, or in cases where all traffic is being
coalesced to a single core, e.g. for transmission out of a network port,
there is no need to keep the dequeue queue shallow, so it can be many times
the burst size while the enqueue queue is kept to 1-2 times the
burst size.
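
To illustrate the kind of split I mean, here is a rough sketch of two port
configurations (the dequeue_queue_depth/enqueue_queue_depth field names are
from the v2 RFC; the rte_event_port_setup() call, its return convention and
the actual numbers are just assumptions for illustration):

#include <stdint.h>
#include <rte_eventdev.h> /* proposed header from the eventdev RFC */

/* Hypothetical helper: configure a load-balanced worker port and a
 * single-core TX port with the depth relationships described above. */
static int
setup_ports(uint8_t dev_id, uint8_t worker_port, uint8_t tx_port)
{
        /* Worker port: shallow dequeue queue for accurate load
         * balancing; enqueue queue at least as deep as the burst. */
        struct rte_event_port_conf worker_conf = {
                .dequeue_queue_depth = 4,
                .enqueue_queue_depth = 8,   /* >= dequeue depth */
        };
        /* TX port: all traffic coalesced to one core, so the dequeue
         * queue can be many times the burst size while the enqueue
         * queue stays at 1-2x the burst size. */
        struct rte_event_port_conf tx_conf = {
                .dequeue_queue_depth = 128,
                .enqueue_queue_depth = 32,
        };

        if (rte_event_port_setup(dev_id, worker_port, &worker_conf) < 0)
                return -1;
        return rte_event_port_setup(dev_id, tx_port, &tx_conf);
}

So rather than a single check in the common layer, the sensible relationship
between the two depths really depends on the role of the port.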

> 
> 2) Any comments on the following item (the section between the ---- lines
> below) that needs improvement?
> -------------------------------------------------------------------------------
> Abstract the differences in event QoS management across the different
> priority schemes available in different HW or SW implementations, while
> keeping the application workflow portable.
> 
> Based on the feedback, there are three different kinds of QoS support
> available in three different HW or SW implementations:
> 1) Priority associated with the event queue
> 2) Priority associated with each event enqueue
> (the same flow can have two different priorities on two separate enqueues)
> 3) Priority associated with the flow (each flow has a unique priority)
> 
> In v2, the differences are abstracted based on device capability
> (RTE_EVENT_DEV_CAP_QUEUE_QOS for the first scheme,
> RTE_EVENT_DEV_CAP_EVENT_QOS for the second and third schemes).
> This split would call for different application workflows for
> nontrivial QoS-enabled applications.
> -------------------------------------------------------------------------------
> After thinking a while, I think RTE_EVENT_DEV_CAP_EVENT_QOS is a
> superset; if so, the subset RTE_EVENT_DEV_CAP_QUEUE_QOS can be
> implemented with RTE_EVENT_DEV_CAP_EVENT_QOS. i.e. we may not need two
> flags; the single flag RTE_EVENT_DEV_CAP_EVENT_QOS is enough to fix the
> portability issue for basic QoS-enabled applications.
> 
> i.e. introduce RTE_EVENT_DEV_CAP_EVENT_QOS as a config option in the device
> configure stage if the application needs fine-grained QoS per event
> enqueue. For trivial applications, the configured
> rte_event_queue_conf->priority can be used as the per-event priority
> (struct rte_event.priority) at rte_event_enqueue() time.
> 
So all implementations should support the concept of priority among
queues, and then there is optional support for event- or flow-based
prioritization. Is that a correct interpretation of what you propose?
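
To make sure I'm reading it right, here is a minimal sketch of the portable
workflow as I understand it (the capability flag, struct rte_event.priority
and rte_event_queue_conf.priority come from your mail; the helper itself and
the dev_cap parameter, which would come from the device info query, are
hypothetical):

#include <stdint.h>
#include <rte_eventdev.h> /* proposed header from the eventdev RFC */

/* Hypothetical helper: set a per-event priority only when the device
 * advertises fine-grained QoS; otherwise rely on the priority already
 * configured on the destination event queue
 * (rte_event_queue_conf.priority). */
static void
set_event_priority(struct rte_event *ev, uint32_t dev_cap, uint8_t prio)
{
        if (dev_cap & RTE_EVENT_DEV_CAP_EVENT_QOS)
                ev->priority = prio; /* per-event QoS, schemes 2 and 3 */
        /* else: ev->priority is ignored by the device and the
         * queue-level priority fixed at configuration time applies. */
}

The event would then be enqueued as usual; only where the priority lives
changes between the two capability levels.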

/Bruce


