[dpdk-dev,v2] eventdev: remove default queue overriding

Message ID 1489159155-80489-1-git-send-email-harry.van.haaren@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Jerin Jacob

Checks

Context               Check    Description
ci/Intel-compilation  success  Compilation OK
ci/checkpatch         success  coding style OK

Commit Message

Van Haaren, Harry March 10, 2017, 3:19 p.m. UTC
  PMDs that only do a specific type of scheduling cannot provide
CFG_ALL_TYPES, so the Eventdev infrastructure should not demand
that every PMD supports CFG_ALL_TYPES.

By not overriding the default configuration of the queue as
suggested by the PMD, the eventdev_common unit tests can pass
on all PMDs, regardless of their capabilities.
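
A minimal sketch of what this means for a caller (setup_with_pmd_default()
is a hypothetical wrapper): passing queue_conf == NULL now configures the
queue exactly as the PMD's own default-configuration callback suggests.

#include <rte_eventdev.h>

/* With queue_conf == NULL, the eventdev layer now keeps the configuration
 * returned by the PMD's queue_def_conf() callback as-is, rather than
 * overriding event_queue_cfg with a library-chosen default.
 */
static int
setup_with_pmd_default(uint8_t dev_id, uint8_t queue_id)
{
	return rte_event_queue_setup(dev_id, queue_id, NULL);
}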

RTE_EVENT_QUEUE_CFG_DEFAULT is no longer used by the eventdev layer, so
it can now be removed. Applications should use CFG_ALL_TYPES if they
require enqueue of all types to a queue, or specify which type of queue
they require.

The CFG_DEFAULT value is changed to CFG_ALL_TYPES in event/skeleton,
to not break the compile.

A capability flag is added that indicates if the underlying PMD
supports creating queues of ALL_TYPES.
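
A minimal sketch of the intended application-side usage, assuming
app_setup_queue() as a hypothetical helper and RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY
as the specific type an application might otherwise request:

#include <rte_eventdev.h>

static int
app_setup_queue(uint8_t dev_id, uint8_t queue_id)
{
	struct rte_event_dev_info info;
	struct rte_event_queue_conf conf = {
		.nb_atomic_flows = 1024,
		.nb_atomic_order_sequences = 1024,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
		/* PMD accepts any schedule type on this queue. */
		conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
	else
		/* Request one specific queue type instead (assumed value). */
		conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;

	return rte_event_queue_setup(dev_id, queue_id, &conf);
}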

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>

---

v2:
- added capability flag to indicate if PMD supports ALL_TYPES

---
 drivers/event/skeleton/skeleton_eventdev.c |  2 +-
 lib/librte_eventdev/rte_eventdev.c         |  1 -
 lib/librte_eventdev/rte_eventdev.h         | 13 +++++++------
 3 files changed, 8 insertions(+), 8 deletions(-)
  

Comments

Jerin Jacob March 21, 2017, 8:21 a.m. UTC | #1
On Fri, Mar 10, 2017 at 03:19:15PM +0000, Harry van Haaren wrote:
> PMDs that only do a specific type of scheduling cannot provide
> CFG_ALL_TYPES, so the Eventdev infrastructure should not demand
> that every PMD supports CFG_ALL_TYPES.
> 
> By not overriding the default configuration of the queue as
> suggested by the PMD, the eventdev_common unit tests can pass
> on all PMDs, regardless of their capabilities.
> 
> RTE_EVENT_QUEUE_CFG_DEFAULT is no longer used by the eventdev layer, so
> it can now be removed. Applications should use CFG_ALL_TYPES if they
> require enqueue of all types to a queue, or specify which type of queue
> they require.
> 
> The CFG_DEFAULT value is changed to CFG_ALL_TYPES in event/skeleton,
> to not break the compile.
> 
> A capability flag is added that indicates if the underlying PMD
> supports creating queues of ALL_TYPES.
> 
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>

I think it is reasonable to have this capability if the SW PMD cannot
support it for performance reasons. The only downside is that the
application will need changes in its fast path. I think the reasonable
trade-off between performance and portability is to keep the packet
processing functions common and keep the pipeline advancement logic in
separate main loops, selected based on the capability.
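
A rough sketch of that split, with the stage-processing work kept common
and only the advancement step differing per main loop; process_stage(),
next_queue_id() and the worker function names are hypothetical application
code, not part of the eventdev API:

#include <stdbool.h>
#include <rte_eventdev.h>

static volatile bool done;

/* Hypothetical application helpers standing in for the real stage work. */
static void process_stage(struct rte_event *ev) { (void)ev; }
static uint8_t next_queue_id(uint8_t cur) { return cur + 1; }

/* Main loop when RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES is set: one queue takes
 * any schedule type, so advancing a stage only flips the schedule type. */
static void
worker_all_types(uint8_t dev_id, uint8_t port)
{
	struct rte_event ev;

	while (!done) {
		if (rte_event_dequeue_burst(dev_id, port, &ev, 1, 0) == 0)
			continue;
		process_stage(&ev);
		ev.op = RTE_EVENT_OP_FORWARD;
		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
		rte_event_enqueue_burst(dev_id, port, &ev, 1);
	}
}

/* Main loop without the capability: each stage has its own queue created
 * with the required RTE_EVENT_QUEUE_CFG_* type, so advancing a stage
 * changes the destination queue instead. */
static void
worker_typed_queues(uint8_t dev_id, uint8_t port)
{
	struct rte_event ev;

	while (!done) {
		if (rte_event_dequeue_burst(dev_id, port, &ev, 1, 0) == 0)
			continue;
		process_stage(&ev);
		ev.op = RTE_EVENT_OP_FORWARD;
		ev.queue_id = next_queue_id(ev.queue_id);
		rte_event_enqueue_burst(dev_id, port, &ev, 1);
	}
}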

Two reasons why CFG_ALL_TYPES is important for HW:
- Event queues are a precious resource: they are very limited in number and
  consume power and internal resources such as SRAM
- Use cases like flow-based event pipelining have no constraint on which
  event queues they enqueue to

I think we can add this capability flag now, and once we have performance
and latency test cases for eventdev we can check whether there is any scope
for improvement in the SW PMD. With that note,

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

> 
> ---
> 
> v2:
> - added capability flag to indicate if PMD supports ALL_TYPES
> 
> ---
>  drivers/event/skeleton/skeleton_eventdev.c |  2 +-
>  lib/librte_eventdev/rte_eventdev.c         |  1 -
>  lib/librte_eventdev/rte_eventdev.h         | 13 +++++++------
>  3 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
> index dee0faf..308e28e 100644
> --- a/drivers/event/skeleton/skeleton_eventdev.c
> +++ b/drivers/event/skeleton/skeleton_eventdev.c
> @@ -196,7 +196,7 @@ skeleton_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
>  
>  	queue_conf->nb_atomic_flows = (1ULL << 20);
>  	queue_conf->nb_atomic_order_sequences = (1ULL << 20);
> -	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
> +	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
>  	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
>  }
>  
> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> index 68bfc3b..c32a776 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -593,7 +593,6 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>  		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
>  					-ENOTSUP);
>  		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
> -		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
>  		queue_conf = &def_conf;
>  	}
>  
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index 7073987..4c73a82 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -271,6 +271,13 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
>   *
>   * @see rte_event_schedule(), rte_event_dequeue_burst()
>   */
> +#define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
> +/**< Event device is capable of enqueuing events of any type to any queue.
> + * If this capability is not set, the queue only supports events of the
> + *  *RTE_EVENT_QUEUE_CFG_* type that it was created with.
> + *
> + * @see RTE_EVENT_QUEUE_CFG_* values
> + */
>  
>  /* Event device priority levels */
>  #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
> @@ -471,12 +478,6 @@ rte_event_dev_configure(uint8_t dev_id,
>  /* Event queue specific APIs */
>  
>  /* Event queue configuration bitmap flags */
> -#define RTE_EVENT_QUEUE_CFG_DEFAULT            (0)
> -/**< Default value of *event_queue_cfg* when rte_event_queue_setup() invoked
> - * with queue_conf == NULL
> - *
> - * @see rte_event_queue_setup()
> - */
>  #define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
>  /**< Mask for event queue schedule type configuration request */
>  #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)
> -- 
> 2.7.4
>
  
Jerin Jacob March 23, 2017, 10:17 a.m. UTC | #2
On Tue, Mar 21, 2017 at 01:51:45PM +0530, Jerin Jacob wrote:
> On Fri, Mar 10, 2017 at 03:19:15PM +0000, Harry van Haaren wrote:
> > PMDs that only do a specific type of scheduling cannot provide
> > CFG_ALL_TYPES, so the Eventdev infrastructure should not demand
> > that every PMD supports CFG_ALL_TYPES.
> > 
> > By not overriding the default configuration of the queue as
> > suggested by the PMD, the eventdev_common unit tests can pass
> > on all PMDs, regardless of their capabilities.
> > 
> > RTE_EVENT_QUEUE_CFG_DEFAULT is no longer used by the eventdev layer, so
> > it can now be removed. Applications should use CFG_ALL_TYPES if they
> > require enqueue of all types to a queue, or specify which type of queue
> > they require.
> > 
> > The CFG_DEFAULT value is changed to CFG_ALL_TYPES in event/skeleton,
> > to not break the compile.
> > 
> > A capability flag is added that indicates if the underlying PMD
> > supports creating queues of ALL_TYPES.
> > 
> > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> 
> I think it is reasonable to have this capability if the SW PMD cannot
> support it for performance reasons. The only downside is that the
> application will need changes in its fast path. I think the reasonable
> trade-off between performance and portability is to keep the packet
> processing functions common and keep the pipeline advancement logic in
> separate main loops, selected based on the capability.
> 
> Two reasons why CFG_ALL_TYPES is important for HW:
> - Event queues are a precious resource: they are very limited in number and
>   consume power and internal resources such as SRAM
> - Use cases like flow-based event pipelining have no constraint on which
>   event queues they enqueue to
> 
> I think we can add this capability flag now, and once we have performance
> and latency test cases for eventdev we can check whether there is any scope
> for improvement in the SW PMD. With that note,
> 
> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Applied to dpdk-next-eventdev/master. Thanks.


> 
> > 
> > ---
> > 
> > v2:
> > - added capability flag to indicate if PMD supports ALL_TYPES
> > 
> > ---
> >  drivers/event/skeleton/skeleton_eventdev.c |  2 +-
> >  lib/librte_eventdev/rte_eventdev.c         |  1 -
> >  lib/librte_eventdev/rte_eventdev.h         | 13 +++++++------
> >  3 files changed, 8 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
> > index dee0faf..308e28e 100644
> > --- a/drivers/event/skeleton/skeleton_eventdev.c
> > +++ b/drivers/event/skeleton/skeleton_eventdev.c
> > @@ -196,7 +196,7 @@ skeleton_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
> >  
> >  	queue_conf->nb_atomic_flows = (1ULL << 20);
> >  	queue_conf->nb_atomic_order_sequences = (1ULL << 20);
> > -	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
> > +	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
> >  	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
> >  }
> >  
> > diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> > index 68bfc3b..c32a776 100644
> > --- a/lib/librte_eventdev/rte_eventdev.c
> > +++ b/lib/librte_eventdev/rte_eventdev.c
> > @@ -593,7 +593,6 @@ rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
> >  		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
> >  					-ENOTSUP);
> >  		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
> > -		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
> >  		queue_conf = &def_conf;
> >  	}
> >  
> > diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> > index 7073987..4c73a82 100644
> > --- a/lib/librte_eventdev/rte_eventdev.h
> > +++ b/lib/librte_eventdev/rte_eventdev.h
> > @@ -271,6 +271,13 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
> >   *
> >   * @see rte_event_schedule(), rte_event_dequeue_burst()
> >   */
> > +#define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
> > +/**< Event device is capable of enqueuing events of any type to any queue.
> > + * If this capability is not set, the queue only supports events of the
> > + *  *RTE_EVENT_QUEUE_CFG_* type that it was created with.
> > + *
> > + * @see RTE_EVENT_QUEUE_CFG_* values
> > + */
> >  
> >  /* Event device priority levels */
> >  #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
> > @@ -471,12 +478,6 @@ rte_event_dev_configure(uint8_t dev_id,
> >  /* Event queue specific APIs */
> >  
> >  /* Event queue configuration bitmap flags */
> > -#define RTE_EVENT_QUEUE_CFG_DEFAULT            (0)
> > -/**< Default value of *event_queue_cfg* when rte_event_queue_setup() invoked
> > - * with queue_conf == NULL
> > - *
> > - * @see rte_event_queue_setup()
> > - */
> >  #define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
> >  /**< Mask for event queue schedule type configuration request */
> >  #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)
> > -- 
> > 2.7.4
> >
  

Patch

diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index dee0faf..308e28e 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -196,7 +196,7 @@  skeleton_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
 
 	queue_conf->nb_atomic_flows = (1ULL << 20);
 	queue_conf->nb_atomic_order_sequences = (1ULL << 20);
-	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
+	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
 	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
 }
 
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 68bfc3b..c32a776 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -593,7 +593,6 @@  rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
 					-ENOTSUP);
 		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
-		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
 		queue_conf = &def_conf;
 	}
 
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 7073987..4c73a82 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -271,6 +271,13 @@  struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
  *
  * @see rte_event_schedule(), rte_event_dequeue_burst()
  */
+#define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
+/**< Event device is capable of enqueuing events of any type to any queue.
+ * If this capability is not set, the queue only supports events of the
+ *  *RTE_EVENT_QUEUE_CFG_* type that it was created with.
+ *
+ * @see RTE_EVENT_QUEUE_CFG_* values
+ */
 
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
@@ -471,12 +478,6 @@  rte_event_dev_configure(uint8_t dev_id,
 /* Event queue specific APIs */
 
 /* Event queue configuration bitmap flags */
-#define RTE_EVENT_QUEUE_CFG_DEFAULT            (0)
-/**< Default value of *event_queue_cfg* when rte_event_queue_setup() invoked
- * with queue_conf == NULL
- *
- * @see rte_event_queue_setup()
- */
 #define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
 /**< Mask for event queue schedule type configuration request */
 #define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)