[PATCH v2 04/11] eventdev: cleanup doxygen comments on info structure

Bruce Richardson bruce.richardson at intel.com
Wed Jan 31 15:37:47 CET 2024


On Wed, Jan 24, 2024 at 12:51:03PM +0100, Mattias Rönnblom wrote:
> On 2024-01-23 10:43, Bruce Richardson wrote:
> > On Tue, Jan 23, 2024 at 10:35:02AM +0100, Mattias Rönnblom wrote:
> > > On 2024-01-19 18:43, Bruce Richardson wrote:
> > > > Some small rewording changes to the doxygen comments on struct
> > > > rte_event_dev_info.
> > > > 
> > > > Signed-off-by: Bruce Richardson <bruce.richardson at intel.com>
> > > > ---
> > > >    lib/eventdev/rte_eventdev.h | 46 ++++++++++++++++++++-----------------
> > > >    1 file changed, 25 insertions(+), 21 deletions(-)
> > > > 
> > > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > > index 57a2791946..872f241df2 100644
> > > > --- a/lib/eventdev/rte_eventdev.h
> > > > +++ b/lib/eventdev/rte_eventdev.h
> > > > @@ -482,54 +482,58 @@ struct rte_event_dev_info {
> > > >    	const char *driver_name;	/**< Event driver name */
> > > >    	struct rte_device *dev;	/**< Device information */
> > > >    	uint32_t min_dequeue_timeout_ns;
> > > > -	/**< Minimum supported global dequeue timeout(ns) by this device */
> > > > +	/**< Minimum global dequeue timeout(ns) supported by this device */
> > > 
> > > Are we missing a bunch of "." here and in the other fields?
> > > 
> > > >    	uint32_t max_dequeue_timeout_ns;
> > > > -	/**< Maximum supported global dequeue timeout(ns) by this device */
> > > > +	/**< Maximum global dequeue timeout(ns) supported by this device */
> > > >    	uint32_t dequeue_timeout_ns;
> > > >    	/**< Configured global dequeue timeout(ns) for this device */
> > > >    	uint8_t max_event_queues;
> > > > -	/**< Maximum event_queues supported by this device */
> > > > +	/**< Maximum event queues supported by this device */
> > > >    	uint32_t max_event_queue_flows;
> > > > -	/**< Maximum supported flows in an event queue by this device*/
> > > > +	/**< Maximum number of flows within an event queue supported by this device*/
> > > >    	uint8_t max_event_queue_priority_levels;
> > > >    	/**< Maximum number of event queue priority levels by this device.
> > > > -	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> > > > +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability.
> > > > +	 * The priority levels are evenly distributed between
> > > > +	 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST.
> > > 
> > > This is a change of the API, in the sense it's defining something previously
> > > left undefined?
> > > 
> > 
> > Well, undefined is pretty useless for app writers, no?
> > However, agreed that the range between HIGHEST and LOWEST is an assumption
> > on my part, chosen because it matches what happens with the event
> > priorities, which are documented in the event struct as "The
> > implementation shall normalize the requested priority to supported
> > priority value" - which, while better
> > than nothing, does technically leave the details of how normalization
> > occurs up to the implementation.
> > 
> > > If you need 6 different priority levels in an app, how do you go about
> > > making sure you find the correct (distinct) Eventdev levels on any event
> > > device supporting >= 6 levels?
> > > 
> > > #define NUM_MY_LEVELS 6
> > > 
> > > #define MY_LEVEL_TO_EVENTDEV_LEVEL(my_level) (((my_level) *
> > > (RTE_EVENT_DEV_PRIORITY_HIGHEST - RTE_EVENT_DEV_PRIORITY_LOWEST)) /
> > > NUM_MY_LEVELS)
> > > 
> > > This way? One would worry a bit about exactly what "evenly" means, in terms of
> > > rounding errors. If you have an event device with 255 priority levels of
> > > (say) 256 levels available in the API, which two levels are the same
> > > priority?
> > 
> > Yes, rounding etc. will be an issue for non-power-of-2 cases.
> > However, I think we do need to clarify this behaviour, so I'm open to
> > alternative suggestions as to how to update this.
> > 
> 
> In retrospect, maybe it would have been better to just express the number of
> priority levels an event device supported, only allow [0, max_levels - 1] in
> the prio field, and leave it to the app to do the conversion/normalization.
>

Yes, in many ways that would be better.
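
For illustration only, such an alternative might have looked something
like this (hypothetical usage, not the current API):

	/* Hypothetical: query the supported level count and pass a
	 * device-level priority directly, with no normalization step.
	 */
	struct rte_event_dev_info info;
	rte_event_dev_info_get(dev_id, &info);
	uint8_t n_levels = info.max_event_queue_priority_levels;
	qconf.priority = my_level; /* app ensures my_level < n_levels */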
 
> Maybe a new <rte_eventdev.h> helper macro would at least suggest to the PMD
> driver implementer and the application designer how this normalization
> should work. Something like the above, but where NUM_MY_LEVELS is an input
> parameter. Would result in an integer division though, so shouldn't be used
> in the fast path.

I think it's actually ok now, having the drivers do the work, since each
driver can choose the optimal method. For those with a power-of-2 number
of priorities, just a shift op works best.
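
As a sketch of what that driver-side normalization can look like for a
power-of-2 level count (the function name here is made up, not taken
from any PMD):

	/* Map a 0-255 priority onto 2^log2_levels device levels using a
	 * right shift, e.g. with 8 levels (log2_levels == 3) prio >> 5
	 * gives levels 0-7, preserving "lower value == higher priority".
	 */
	static inline uint8_t
	pmd_normalize_prio(uint8_t prio, unsigned int log2_levels)
	{
		return prio >> (8 - log2_levels);
	}

For non-power-of-2 counts a divide is needed and, to your rounding
point, a simple floor mapping such as prio * N / 256 with N = 255 maps
both priority 0 and priority 1 to level 0.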

The key thing for the documentation here, to my mind, is to make it
clear that even though the number of priorities is N, you still specify
the relative priorities in the range of 0-255. This is documented in the
queue configuration, so, while we could leave it unmentioned here, I
think for clarity it should be called out. I'm going to reword slightly
as:

 * The implementation shall normalize priority values specified between
 * @ref RTE_EVENT_DEV_PRIORITY_HIGHEST and @ref RTE_EVENT_DEV_PRIORITY_LOWEST
 * to map them internally to this range of priorities.
 *
 * @see rte_event_queue_conf.priority

This way the wording in the two places is similar.
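
For completeness, on the application side this just means using the
normal 0-255 range when setting up a queue, e.g. (standard eventdev
calls, error handling omitted):

	struct rte_event_queue_conf qconf;

	rte_event_queue_default_conf_get(dev_id, queue_id, &qconf);
	/* Priority is always given in the 0-255 range, regardless of
	 * how many levels the device implements; the PMD normalizes it.
	 */
	qconf.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
	rte_event_queue_setup(dev_id, queue_id, &qconf);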

/Bruce

