[PATCH v2 03/11] eventdev: update documentation on device capability flags

Bruce Richardson bruce.richardson at intel.com
Tue Jan 23 10:34:18 CET 2024


On Tue, Jan 23, 2024 at 10:18:53AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Update the device capability docs, to:
> > 
> > * include more cross-references
> > * split longer text into paragraphs, in most cases with each flag having
> >    a single-line summary at the start of the doc block
> > * general comment rewording and clarification as appropriate
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 130 ++++++++++++++++++++++++++----------
> >   1 file changed, 93 insertions(+), 37 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 949e957f1b..57a2791946 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -243,143 +243,199 @@ struct rte_event;
> >   /* Event device capability bitmap flags */
> >   #define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
> >   /**< Event scheduling prioritization is based on the priority and weight
> > - * associated with each event queue. Events from a queue with highest priority
> > - * is scheduled first. If the queues are of same priority, weight of the queues
> > + * associated with each event queue.
> > + *
> > + * Events from the queue with the highest priority
> > + * are scheduled first. If the queues are of the same priority, the weights of the queues
> >    * are considered to select a queue in a weighted round robin fashion.
> >    * Subsequent dequeue calls from an event port could see events from the same
> >    * event queue, if the queue is configured with an affinity count. Affinity
> >    * count is the number of subsequent dequeue calls, in which an event port
> >    * should use the same event queue if the queue is non-empty
> >    *
> 
> Maybe a subject for a future documentation patch: what happens to order
> maintenance for different-priority events? I've always assumed events on
> atomic/ordered queues were only ordered by flow_id within the same
> priority level, not by flow_id alone.
> 

Agree with this. If events with the same flow_id are spread across two
priority levels, they are not the same flow. I'll try and clarify this in
v3.
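
To make the semantics concrete, here's a rough sketch of what I mean -
untested, with dev_id/port_id/queue_id as placeholders, and assuming the
clarified wording where ordering is only guaranteed within a
(flow_id, priority) pair on devices reporting the EVENT_QOS capability:

#include <string.h>
#include <rte_eventdev.h>

static void
enqueue_same_flow_two_prios(uint8_t dev_id, uint8_t port_id,
		uint8_t queue_id, struct rte_mbuf *m0, struct rte_mbuf *m1)
{
	struct rte_event_dev_info info;
	struct rte_event ev[2];

	/* ev.priority is only honoured if the device reports
	 * RTE_EVENT_DEV_CAP_EVENT_QOS; otherwise it is ignored.
	 */
	rte_event_dev_info_get(dev_id, &info);
	if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_QOS))
		return;

	memset(ev, 0, sizeof(ev));
	ev[0].op = RTE_EVENT_OP_NEW;
	ev[0].queue_id = queue_id;
	ev[0].sched_type = RTE_SCHED_TYPE_ATOMIC;
	ev[0].event_type = RTE_EVENT_TYPE_CPU;
	ev[0].flow_id = 7;
	ev[0].priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
	ev[0].mbuf = m0;

	ev[1] = ev[0];
	ev[1].priority = RTE_EVENT_DEV_PRIORITY_LOWEST;
	ev[1].mbuf = m1;

	/* Same flow_id, different priorities: these are effectively two
	 * distinct flows, so no ordering (or atomic exclusivity) is
	 * guaranteed between m0 and m1.
	 */
	rte_event_enqueue_burst(dev_id, port_id, ev, 2);
}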

> > + * NOTE: A device may use both queue prioritization and event prioritization
> > + * (@ref RTE_EVENT_DEV_CAP_EVENT_QOS capability) when making packet scheduling decisions.
> > + *
> >    *  @see rte_event_queue_setup(), rte_event_queue_attr_set()
> >    */
> >   #define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
> >   /**< Event scheduling prioritization is based on the priority associated with
> > - *  each event. Priority of each event is supplied in *rte_event* structure
> > + *  each event.
> > + *
> > + *  Priority of each event is supplied in *rte_event* structure
> >    *  on each enqueue operation.
> > + *  If this capability is not set, the priority field of the event structure
> > + *  is ignored for each event.
> >    *
> > + * NOTE: A device may use both queue prioritization (@ref RTE_EVENT_DEV_CAP_QUEUE_QOS capability)
> > + * and event prioritization when making packet scheduling decisions.
> > + *
> >    *  @see rte_event_enqueue_burst()
> >    */
> >   #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
> >   /**< Event device operates in distributed scheduling mode.
> > + *
> >    * In distributed scheduling mode, event scheduling happens in HW or
> > - * rte_event_dequeue_burst() or the combination of these two.
> > + * rte_event_dequeue_burst() / rte_event_enqueue_burst() or the combination of these two.
> >    * If the flag is not set then eventdev is centralized and thus needs a
> >    * dedicated service core that acts as a scheduling thread .
> >    *
> > - * @see rte_event_dequeue_burst()
> > + * @see rte_event_dev_service_id_get
> >    */
> >   #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
> >   /**< Event device is capable of enqueuing events of any type to any queue.
> > + *
> >    * If this capability is not set, the queue only supports events of the
> > - *  *RTE_SCHED_TYPE_* type that it was created with.
> > + * *RTE_SCHED_TYPE_* type that it was created with.
> > + * Any events of other types scheduled to the queue will be handled in an
> > + * implementation-dependent manner. They may be dropped by the
> > + * event device, or enqueued with the scheduling type adjusted to the
> > + * correct/supported value.
> 
> Having the application set sched_type when it was already set at the
> level of the queue never made sense to me.
> 
> I can't see any reasons why this field shouldn't be ignored by the event
> device on non-RTE_EVENT_QUEUE_CFG_ALL_TYPES queues.
> 
> If the behavior is indeed undefined, I think it's better to just say
> "undefined" rather than the above speculation.
> 

+1, I completely agree with ignoring sched_type for fixed-type queues. It
saves drivers having to check it.
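
For reference, the distinction I'd like the docs to capture, as a rough
sketch (dev_id and the queue ids are placeholders, the device is assumed
to already be configured with at least two queues, and the "ignored"
comment reflects the proposed v3 wording rather than guaranteed current
driver behaviour):

#include <rte_eventdev.h>

static int
setup_fixed_and_all_types_queues(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_queue_conf qconf;
	int ret;

	rte_event_dev_info_get(dev_id, &info);

	/* Queue 0: fixed-type queue supporting only atomic scheduling.
	 * Under the proposed clarification, ev.sched_type would simply
	 * be ignored for events enqueued to this queue.
	 */
	ret = rte_event_queue_default_conf_get(dev_id, 0, &qconf);
	if (ret < 0)
		return ret;
	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
	ret = rte_event_queue_setup(dev_id, 0, &qconf);
	if (ret < 0)
		return ret;

	/* Queue 1: an all-types queue, on which ev.sched_type is
	 * honoured per event. Only valid if the device reports the
	 * RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability.
	 */
	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES) {
		ret = rte_event_queue_default_conf_get(dev_id, 1, &qconf);
		if (ret < 0)
			return ret;
		qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
		ret = rte_event_queue_setup(dev_id, 1, &qconf);
	}
	return ret;
}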

The reason I didn't put that in was a desire to minimise possible
semantic changes, but I think later in the patchset my desire to avoid such
changes waned, and I have included more "severe" changes than I originally
would have liked. [The change to "release" events on ordered queues is the
big one I'm aware of, which I should really have held back for a separate
dedicated patch/patchset.]

Unless someone objects, I'll update that in a v3. However, many of these
subtle changes may mean updates to drivers, so how we go about clarifying
things and getting drivers compatible is something we need to think about.
We should probably target 24.11 as the point by which all behaviour is
clarified and drivers are updated where possible. There are so many points
of ambiguity - especially in error cases - that I expect we have some work
to do to get everything aligned.

/Bruce

