[dpdk-dev,08/10] event/octeontx: add option to use fpavf as chunk pool

Message ID 20180216213700.3415-9-pbhagavatula@caviumnetworks.com (mailing list archive)
State Superseded, archived
Delegated to: Jerin Jacob
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues

Commit Message

Pavan Nikhilesh Feb. 16, 2018, 9:36 p.m. UTC
  Add a compile-time configurable option to force TIMvf to use the Octeontx
FPAvf pool manager as its chunk pool.
When FPAvf is used as the pool manager, TIMvf automatically frees the
chunks back to FPAvf through the gpool-id.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 config/common_base                    |  1 +
 drivers/event/octeontx/timvf_evdev.c  | 23 +++++++++++++++++++++++
 drivers/event/octeontx/timvf_evdev.h  |  3 +++
 drivers/event/octeontx/timvf_worker.h | 35 +++++++++++++++++++++++++++++++++++
 4 files changed, 62 insertions(+)
  

Comments

Santosh Shukla Feb. 18, 2018, 11:42 a.m. UTC | #1
Hi Pavan,


On Saturday 17 February 2018 03:06 AM, Pavan Nikhilesh wrote:
> Add compile-time configurable option to force TIMvf to use Octeontx
> FPAvf pool manager as its chunk pool.
> When FPAvf is used as pool manager the TIMvf automatically frees the
> chunks to FPAvf through gpool-id.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> ---
>  config/common_base                    |  1 +
>  drivers/event/octeontx/timvf_evdev.c  | 23 +++++++++++++++++++++++
>  drivers/event/octeontx/timvf_evdev.h  |  3 +++
>  drivers/event/octeontx/timvf_worker.h | 35 +++++++++++++++++++++++++++++++++++
>  4 files changed, 62 insertions(+)
>
> diff --git a/config/common_base b/config/common_base
> index ad03cf433..00010de92 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -562,6 +562,7 @@ CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV=y
>  # Compile PMD for octeontx sso event device
>  #
>  CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
> +CONFIG_RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF=n
>  

How about using CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL?

>  #
>  # Compile PMD for OPDL event device
> diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
> index ffdfbb387..386eaa08f 100644
> --- a/drivers/event/octeontx/timvf_evdev.c
> +++ b/drivers/event/octeontx/timvf_evdev.c
> @@ -162,10 +162,27 @@ timvf_ring_start(const struct rte_event_timer_adapter *adptr)
>  		1ull << 48 |
>  		1ull << 47 |
>  		1ull << 44 |
> +#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
> +		1ull << 43 |
> +#endif
>  		(timr->meta.nb_bkts - 1);
>  
>  	rctrl.rctrl2 = (uint64_t)(TIM_CHUNK_SIZE / 16) << 40;
>  
> +#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
> +	uintptr_t pool;
> +	pool = (uintptr_t)((struct rte_mempool *)
> +			timr->meta.chunk_pool)->pool_id;
> +	ret = octeontx_fpa_bufpool_gpool(pool);
> +	if (ret < 0) {
> +		timvf_log_dbg("Unable to get gaura id");
> +		ret = -ENOMEM;
> +		goto error;
> +	}
> +	timvf_write64((uint64_t)ret,
> +			(uint8_t *)timr->vbar0 + TIM_VRING_AURA);
> +#endif
> +
>  	timvf_write64((uint64_t)timr->meta.bkt,
>  			(uint8_t *)timr->vbar0 + TIM_VRING_BASE);
>  	if (timvf_ring_conf_set(&rctrl, timr->tim_ring_id)) {
> @@ -296,9 +313,15 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
>  		return -ENOMEM;
>  	}
>  
> +#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
> +	ret = rte_mempool_set_ops_byname(timr->meta.chunk_pool,
> +			"octeontx_fpavf", NULL);
> +	timvf_log_dbg("Giving back chunks to fpa gaura : %d", ret);
> +#else
>  	ret = rte_mempool_set_ops_byname(timr->meta.chunk_pool,
>  			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
>  	timvf_log_dbg("Not giving back chunks to fpa");
> +#endif
>  

Maybe use rte_mbuf_best_mempool_ops() or rte_mbuf_user_mempool_ops();
that way you avoid the above ifdef.

Also a suggestion: try to reduce the ifdefs by creating a new header file
called timvf_fpa_evdev.h, abstracting all possible ifdefs there, and
creating small static inline APIs to call from the timvf eventdev driver.
[...]
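The abstraction Santosh suggests can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not the driver's actual code: `timvf_refill_chunk` and `struct tim_chunk` are hypothetical stand-ins, and the single `#ifdef` lives in one header while call sites stay ifdef-free.

```c
#include <stddef.h>

/* Hypothetical stand-in for the driver's chunk type. */
struct tim_chunk { struct tim_chunk *next; };

/* Define to emulate CONFIG_RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF=y */
/* #define RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF */

#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
/* FPAvf path: always take a fresh chunk from the mempool; the
 * hardware recycles used chunks back to the aura on its own. */
static inline struct tim_chunk *
timvf_refill_chunk(struct tim_chunk *from_pool, struct tim_chunk *recycled)
{
	(void)recycled;
	return from_pool;
}
#else
/* Software-recycling path: reuse the bucket's chunk list, as
 * timr_clr_bkt() does in the patch. */
static inline struct tim_chunk *
timvf_refill_chunk(struct tim_chunk *from_pool, struct tim_chunk *recycled)
{
	(void)from_pool;
	return recycled;
}
#endif
```

The arm paths in timvf_worker.h would then call the inline helper unconditionally, and only this header changes between the two build configurations.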

Thanks.
  
Pavan Nikhilesh Feb. 19, 2018, 9:15 a.m. UTC | #2
Hi Santosh,

On Sun, Feb 18, 2018 at 05:12:33PM +0530, santosh wrote:
> Hi Pavan,
>
>
> On Saturday 17 February 2018 03:06 AM, Pavan Nikhilesh wrote:
> > Add compile-time configurable option to force TIMvf to use Octeontx
> > FPAvf pool manager as its chunk pool.
> > When FPAvf is used as pool manager the TIMvf automatically frees the
> > chunks to FPAvf through gpool-id.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> > ---
> >  config/common_base                    |  1 +
> >  drivers/event/octeontx/timvf_evdev.c  | 23 +++++++++++++++++++++++
> >  drivers/event/octeontx/timvf_evdev.h  |  3 +++
> >  drivers/event/octeontx/timvf_worker.h | 35 +++++++++++++++++++++++++++++++++++
> >  4 files changed, 62 insertions(+)
> >
> > diff --git a/config/common_base b/config/common_base
> > index ad03cf433..00010de92 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -562,6 +562,7 @@ CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV=y
> >  # Compile PMD for octeontx sso event device
> >  #
> >  CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
> > +CONFIG_RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF=n
> >
>
> How about using CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL?

I think CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL is always enabled by default;
would using something like CONFIG_RTE_LIBRTE_TIMVF_USE_OCTEONTX_MEMPOOL be
better? i.e. telling TIMvf to specifically use FPA.

>
> >  #
> >  # Compile PMD for OPDL event device
> > diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
> > index ffdfbb387..386eaa08f 100644
> > --- a/drivers/event/octeontx/timvf_evdev.c
> > +++ b/drivers/event/octeontx/timvf_evdev.c
> > @@ -162,10 +162,27 @@ timvf_ring_start(const struct rte_event_timer_adapter *adptr)
> >  		1ull << 48 |
> >  		1ull << 47 |
> >  		1ull << 44 |
> > +#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
> > +		1ull << 43 |
> > +#endif
> >  		(timr->meta.nb_bkts - 1);
> >
> >  	rctrl.rctrl2 = (uint64_t)(TIM_CHUNK_SIZE / 16) << 40;
> >
> > +#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
> > +	uintptr_t pool;
> > +	pool = (uintptr_t)((struct rte_mempool *)
> > +			timr->meta.chunk_pool)->pool_id;
> > +	ret = octeontx_fpa_bufpool_gpool(pool);
> > +	if (ret < 0) {
> > +		timvf_log_dbg("Unable to get gaura id");
> > +		ret = -ENOMEM;
> > +		goto error;
> > +	}
> > +	timvf_write64((uint64_t)ret,
> > +			(uint8_t *)timr->vbar0 + TIM_VRING_AURA);
> > +#endif
> > +
> >  	timvf_write64((uint64_t)timr->meta.bkt,
> >  			(uint8_t *)timr->vbar0 + TIM_VRING_BASE);
> >  	if (timvf_ring_conf_set(&rctrl, timr->tim_ring_id)) {
> > @@ -296,9 +313,15 @@ timvf_ring_create(struct rte_event_timer_adapter *adptr)
> >  		return -ENOMEM;
> >  	}
> >
> > +#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
> > +	ret = rte_mempool_set_ops_byname(timr->meta.chunk_pool,
> > +			"octeontx_fpavf", NULL);
> > +	timvf_log_dbg("Giving back chunks to fpa gaura : %d", ret);
> > +#else
> >  	ret = rte_mempool_set_ops_byname(timr->meta.chunk_pool,
> >  			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
> >  	timvf_log_dbg("Not giving back chunks to fpa");
> > +#endif
> >
>
> Maybe use rte_mbuf_best_mempool_ops() or rte_mbuf_user_mempool_ops();
> that way you avoid the above ifdef.

In timvf_worker.h, the logic used to arm the event timer changes with the
mempool used, i.e. octeontx_fpavf has a different implementation vs. ring.
We could use rte_mbuf_best_mempool_ops(), but we would again need to check
whether the returned ops is fpavf to enable automatic buffer recycling.
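The runtime check Pavan describes could look roughly like this. A minimal sketch: `timvf_pool_is_fpavf` is a hypothetical helper, and the ops name is passed in directly so the example is self-contained (in the real driver it would come from the mempool's registered ops, e.g. via rte_mempool_get_ops()).

```c
#include <stdbool.h>
#include <string.h>

/* Compare the chunk pool's ops name against the fpavf ops name
 * ("octeontx_fpavf", as registered in the patch) to decide at
 * runtime whether hardware buffer recycling can be enabled. */
static bool
timvf_pool_is_fpavf(const char *ops_name)
{
	return strcmp(ops_name, "octeontx_fpavf") == 0;
}
```

With a check like this, the driver could pick the fpavf or software-recycling arm path at ring-create time instead of at compile time.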

>
> Also a suggestion: try to reduce the ifdefs by creating a new header file
> called timvf_fpa_evdev.h, abstracting all possible ifdefs there, and
> creating small static inline APIs to call from the timvf eventdev driver.
> [...]

Agreed, will modify arm logic to reduce ifdef clutter.

>
> Thanks.
>

Thanks,
Pavan.
  
Carrillo, Erik G Feb. 23, 2018, 8:17 p.m. UTC | #3
Hi Pavan,

> -----Original Message-----
> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Friday, February 16, 2018 3:37 PM
> To: jerin.jacob@caviumnetworks.com;
> santosh.shukla@caviumnetworks.com; Carrillo, Erik G
> <erik.g.carrillo@intel.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH 08/10] event/octeontx: add option to use fpavf
> as chunk pool
> 
> Add compile-time configurable option to force TIMvf to use Octeontx FPAvf
> pool manager as its chunk pool.
> When FPAvf is used as pool manager the TIMvf automatically frees the
> chunks to FPAvf through gpool-id.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> ---

<...snipped...>

> @@ -241,7 +243,16 @@ timvf_add_entry_brst(struct timvf_ring *timr, const uint16_t rel_bkt,
>  				bkt->first_chunk = (uint64_t) chunk;
>  			}
>  		} else {
> +#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
>  			chunk = timr_clr_bkt(timr, bkt);
> +#else
> +			if (unlikely(rte_mempool_get(timr->meta.chunk_pool,
> +							(void **)&chunk))) {
> +				timr_bkt_set_rem(bkt, 0);
> +				tim[index]->state = RTE_EVENT_TIMER_ERROR;
> +				return -ENOMEM;

You return a negative errno value here, but in this case the caller expects the number of timers that were successfully armed.

Regards,
Gabriel

> +			}
> +#endif
>  			bkt->first_chunk = (uint64_t) chunk;
>  		}
>  		*(uint64_t *)(chunk + nb_chunk_slots) = 0;
> @@ -355,7 +366,18 @@ timvf_add_entry_sp(struct timvf_ring *timr, const uint32_t rel_bkt,

<...snipped...>
  
Pavan Nikhilesh Feb. 26, 2018, 7:25 p.m. UTC | #4
Hi Gabriel,

On Fri, Feb 23, 2018 at 08:17:07PM +0000, Carrillo, Erik G wrote:
> Hi Pavan,
>
> > -----Original Message-----
> > From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> > Sent: Friday, February 16, 2018 3:37 PM
> > To: jerin.jacob@caviumnetworks.com;
> > santosh.shukla@caviumnetworks.com; Carrillo, Erik G
> > <erik.g.carrillo@intel.com>
> > Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH 08/10] event/octeontx: add option to use fpavf
> > as chunk pool
> >
> > Add compile-time configurable option to force TIMvf to use Octeontx FPAvf
> > pool manager as its chunk pool.
> > When FPAvf is used as pool manager the TIMvf automatically frees the
> > chunks to FPAvf through gpool-id.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> > ---
>
> <...snipped...>
>
> > @@ -241,7 +243,16 @@ timvf_add_entry_brst(struct timvf_ring *timr, const uint16_t rel_bkt,
> >  				bkt->first_chunk = (uint64_t) chunk;
> >  			}
> >  		} else {
> > +#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
> >  			chunk = timr_clr_bkt(timr, bkt);
> > +#else
> > +			if (unlikely(rte_mempool_get(timr->meta.chunk_pool,
> > +							(void **)&chunk))) {
> > +				timr_bkt_set_rem(bkt, 0);
> > +				tim[index]->state = RTE_EVENT_TIMER_ERROR;
> > +				return -ENOMEM;
>
> You return a negative errno value here, but in this case the caller expects the number of timers that were successfully armed.

Agreed, will add a check in `timvf_timer_reg_brst`[1] to detect whether the
call to timvf_add_entry_brst has failed.

[1]http://dpdk.org/dev/patchwork/patch/35206/
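The fix Pavan agrees to can be sketched as follows. This is a hypothetical, self-contained illustration of the return-value contract, not the actual driver code: `add_entry_brst_stub` and `timer_reg_brst_stub` are stand-ins for timvf_add_entry_brst and timvf_timer_reg_brst.

```c
#include <errno.h>
#include <stdint.h>

/* Stand-in for timvf_add_entry_brst(): returns the count of timers
 * armed, or -ENOMEM when chunk allocation fails mid-burst. */
static int
add_entry_brst_stub(uint16_t nb_timers, int alloc_fails)
{
	return alloc_fails ? -ENOMEM : (int)nb_timers;
}

/* The registration wrapper checks for a negative errno and reports
 * "zero timers armed" instead of returning the errno as a count;
 * the per-timer state (RTE_EVENT_TIMER_ERROR) conveys the cause. */
static uint16_t
timer_reg_brst_stub(uint16_t nb_timers, int alloc_fails)
{
	int ret = add_entry_brst_stub(nb_timers, alloc_fails);

	if (ret < 0)
		return 0;
	return (uint16_t)ret;
}
```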

>
> Regards,
> Gabriel

Thanks,
Pavan.

>
> > +			}
> > +#endif
> >  			bkt->first_chunk = (uint64_t) chunk;
> >  		}
> >  		*(uint64_t *)(chunk + nb_chunk_slots) = 0;
> > @@ -355,7 +366,18 @@ timvf_add_entry_sp(struct timvf_ring *timr, const uint32_t rel_bkt,
>
> <...snipped...>
>
  

Patch

diff --git a/config/common_base b/config/common_base
index ad03cf433..00010de92 100644
--- a/config/common_base
+++ b/config/common_base
@@ -562,6 +562,7 @@  CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV=y
 # Compile PMD for octeontx sso event device
 #
 CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y
+CONFIG_RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF=n
 
 #
 # Compile PMD for OPDL event device
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index ffdfbb387..386eaa08f 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -162,10 +162,27 @@  timvf_ring_start(const struct rte_event_timer_adapter *adptr)
 		1ull << 48 |
 		1ull << 47 |
 		1ull << 44 |
+#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
+		1ull << 43 |
+#endif
 		(timr->meta.nb_bkts - 1);
 
 	rctrl.rctrl2 = (uint64_t)(TIM_CHUNK_SIZE / 16) << 40;
 
+#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
+	uintptr_t pool;
+	pool = (uintptr_t)((struct rte_mempool *)
+			timr->meta.chunk_pool)->pool_id;
+	ret = octeontx_fpa_bufpool_gpool(pool);
+	if (ret < 0) {
+		timvf_log_dbg("Unable to get gaura id");
+		ret = -ENOMEM;
+		goto error;
+	}
+	timvf_write64((uint64_t)ret,
+			(uint8_t *)timr->vbar0 + TIM_VRING_AURA);
+#endif
+
 	timvf_write64((uint64_t)timr->meta.bkt,
 			(uint8_t *)timr->vbar0 + TIM_VRING_BASE);
 	if (timvf_ring_conf_set(&rctrl, timr->tim_ring_id)) {
@@ -296,9 +313,15 @@  timvf_ring_create(struct rte_event_timer_adapter *adptr)
 		return -ENOMEM;
 	}
 
+#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
+	ret = rte_mempool_set_ops_byname(timr->meta.chunk_pool,
+			"octeontx_fpavf", NULL);
+	timvf_log_dbg("Giving back chunks to fpa gaura : %d", ret);
+#else
 	ret = rte_mempool_set_ops_byname(timr->meta.chunk_pool,
 			RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
 	timvf_log_dbg("Not giving back chunks to fpa");
+#endif
 
 	if (ret != 0) {
 		timvf_log_err("Unable to set chunkpool ops.");
diff --git a/drivers/event/octeontx/timvf_evdev.h b/drivers/event/octeontx/timvf_evdev.h
index 5e526a36a..02bd99a34 100644
--- a/drivers/event/octeontx/timvf_evdev.h
+++ b/drivers/event/octeontx/timvf_evdev.h
@@ -24,6 +24,9 @@ 
 #include <rte_reciprocal.h>
 
 #include <octeontx_mbox.h>
+#ifdef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
+#include <octeontx_fpavf.h>
+#endif
 
 #define timvf_log(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, otx_logtype_timvf, \
diff --git a/drivers/event/octeontx/timvf_worker.h b/drivers/event/octeontx/timvf_worker.h
index 320eb6ac1..c3f37372a 100644
--- a/drivers/event/octeontx/timvf_worker.h
+++ b/drivers/event/octeontx/timvf_worker.h
@@ -144,6 +144,7 @@  timr_bkt_clr_nent(struct tim_mem_bucket *bktp)
 	return __atomic_and_fetch((uint64_t *)&bktp->w1, v, __ATOMIC_ACQ_REL);
 }
 
+#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
 static inline __hot struct tim_mem_entry*
 timr_clr_bkt(struct timvf_ring *timr, struct tim_mem_bucket *bkt)
 {
@@ -159,6 +160,7 @@  timr_clr_bkt(struct timvf_ring *timr, struct tim_mem_bucket *bkt)
 	}
 	return (struct tim_mem_entry *)bkt->first_chunk;
 }
+#endif
 
 /* Burst mode functions */
 static inline int __hot
@@ -241,7 +243,16 @@  timvf_add_entry_brst(struct timvf_ring *timr, const uint16_t rel_bkt,
 				bkt->first_chunk = (uint64_t) chunk;
 			}
 		} else {
+#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
 			chunk = timr_clr_bkt(timr, bkt);
+#else
+			if (unlikely(rte_mempool_get(timr->meta.chunk_pool,
+							(void **)&chunk))) {
+				timr_bkt_set_rem(bkt, 0);
+				tim[index]->state = RTE_EVENT_TIMER_ERROR;
+				return -ENOMEM;
+			}
+#endif
 			bkt->first_chunk = (uint64_t) chunk;
 		}
 		*(uint64_t *)(chunk + nb_chunk_slots) = 0;
@@ -355,7 +366,18 @@  timvf_add_entry_sp(struct timvf_ring *timr, const uint32_t rel_bkt,
 			}
 			*(uint64_t *)(chunk + nb_chunk_slots) = 0;
 		} else {
+#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
 			chunk = timr_clr_bkt(timr, bkt);
+#else
+			if (unlikely(rte_mempool_get(timr->meta.chunk_pool,
+							(void **)&chunk))) {
+				timr_bkt_set_rem(bkt, 0);
+				tim->impl_opaque[0] =
+					tim->impl_opaque[1] = 0;
+				tim->state = RTE_EVENT_TIMER_ERROR;
+				return -ENOMEM;
+			}
+#endif
 			*(uint64_t *)(chunk + nb_chunk_slots) = 0;
 			bkt->first_chunk = (uint64_t) chunk;
 		}
@@ -438,7 +460,20 @@  timvf_add_entry_mp(struct timvf_ring *timr, const uint32_t rel_bkt,
 				}
 				*(uint64_t *)(chunk + nb_chunk_slots) = 0;
 			} else {
+#ifndef RTE_PMD_OCTEONTX_TIMVF_USE_FPAVF
 				chunk = timr_clr_bkt(timr, bkt);
+#else
+				if (unlikely(rte_mempool_get(
+							timr->meta.chunk_pool,
+							(void **)&chunk))) {
+					timr_bkt_set_rem(bkt, 0);
+					timr_bkt_dec_lock(bkt);
+					tim->impl_opaque[0] =
+						tim->impl_opaque[1] = 0;
+					tim->state = RTE_EVENT_TIMER_ERROR;
+					return -ENOMEM;
+				}
+#endif
 				*(uint64_t *)(chunk + nb_chunk_slots) = 0;
 				bkt->first_chunk = (uint64_t) chunk;
 			}