[v1,1/1] mempool/octeontx2: fix npa pool range errors

Message ID 20190705103341.30219-1-vattunuru@marvell.com (mailing list archive)
State Changes Requested, archived
Delegated to: Thomas Monjalon
Series [v1,1/1] mempool/octeontx2: fix npa pool range errors

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/intel-Performance-Testing success Performance Testing PASS
ci/Intel-compilation fail apply issues

Commit Message

Vamsi Krishna Attunuru July 5, 2019, 10:33 a.m. UTC
  From: Vamsi Attunuru <vattunuru@marvell.com>

Patch fixes NPA pool range errors observed while creating a mempool.
During mempool creation, the octeontx2 mempool driver populates the
pool range fields before enqueueing the buffers. If any enqueue or
dequeue operation reaches the NPA hardware before the range fields
are updated in the HW context, those operations result in NPA range
errors. Patch adds a routine that reads back the HW context and
verifies whether the range fields have been updated.

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 37 ++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)
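
To make the fix's intent concrete for readers outside the driver, below is a
minimal, self-contained sketch of the write-then-verify pattern the patch
applies. Every name in it (hw_range_set, hw_ctx_read, range_update_check,
struct hw_pool_ctx) is an illustrative stand-in, not otx2 driver API; the
real driver reads the pool context back over the NPA mailbox, as the patch
below shows.

/*
 * Illustrative sketch only: the write-then-verify pattern used by the
 * patch, with stand-in names instead of the otx2 driver/mailbox API.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct hw_pool_ctx {		/* stand-in for the NPA pool context */
	uint64_t ptr_start;
	uint64_t ptr_end;
};

static struct hw_pool_ctx hw_ctx;	/* pretend HW-owned copy */

/* Stand-in for programming the buffer range into hardware. */
static void hw_range_set(uint64_t start, uint64_t end)
{
	hw_ctx.ptr_start = start;
	hw_ctx.ptr_end = end;
}

/* Stand-in for reading the pool context back from hardware. */
static int hw_ctx_read(struct hw_pool_ctx *out)
{
	*out = hw_ctx;
	return 0;
}

/* Fail unless the programmed range is already visible in the context. */
static int range_update_check(uint64_t start, uint64_t end)
{
	struct hw_pool_ctx ctx;

	if (hw_ctx_read(&ctx) != 0)
		return -EIO;
	if (ctx.ptr_start != start || ctx.ptr_end != end)
		return -ERANGE;	/* same errno the patch uses */
	return 0;
}

int main(void)
{
	const uint64_t iova = 0x100000, len = 0x10000;

	hw_range_set(iova, iova + len);
	/* Enqueue buffers only once the range is confirmed in HW;
	 * otx2_npa_populate() returns -EBUSY at this point instead.
	 */
	if (range_update_check(iova, iova + len) != 0)
		return 1;
	puts("range visible; safe to populate the pool");
	return 0;
}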
  

Comments

Jerin Jacob Kollanukkaran July 7, 2019, 2:21 p.m. UTC | #1
> -----Original Message-----
> From: vattunuru@marvell.com <vattunuru@marvell.com>
> Sent: Friday, July 5, 2019 4:04 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> Vamsi Krishna Attunuru <vattunuru@marvell.com>
> Subject: [PATCH v1 1/1] mempool/octeontx2: fix npa pool range errors
> 
> From: Vamsi Attunuru <vattunuru@marvell.com>
> 
> Patch fixes NPA pool range errors observed while creating a mempool.
> During mempool creation, the octeontx2 mempool driver populates the pool
> range fields before enqueueing the buffers. If any enqueue or dequeue
> operation reaches the NPA hardware before the range fields are updated in
> the HW context, those operations result in NPA range errors. Patch adds a
> routine that reads back the HW context and verifies whether the range
> fields have been updated.
> 
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>


1) Please fix check-git-log.sh

$ ./devtools/check-git-log.sh
Missing 'Fixes' tag:
        mempool/octeontx2: fix npa pool range errors

2) Please mention in the git commit log that this issue happens when
mempool objects are from different mempools
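
For reference, the tag check-git-log.sh is asking for follows the usual
DPDK form; the commit id below is a placeholder, since the offending
commit is not named in this thread:

        Fixes: <commit-id> ("mempool/octeontx2: <subject of the commit that introduced the bug>")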
  
Thomas Monjalon July 7, 2019, 5:24 p.m. UTC | #2
07/07/2019 16:21, Jerin Jacob Kollanukkaran:
> 
> > -----Original Message-----
> > From: vattunuru@marvell.com <vattunuru@marvell.com>
> > Sent: Friday, July 5, 2019 4:04 PM
> > To: dev@dpdk.org
> > Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> > Vamsi Krishna Attunuru <vattunuru@marvell.com>
> > Subject: [PATCH v1 1/1] mempool/octeontx2: fix npa pool range errors
> > 
> > From: Vamsi Attunuru <vattunuru@marvell.com>
> > 
> > Patch fixes NPA pool range errors observed while creating a mempool.
> > During mempool creation, the octeontx2 mempool driver populates the pool
> > range fields before enqueueing the buffers. If any enqueue or dequeue
> > operation reaches the NPA hardware before the range fields are updated in
> > the HW context, those operations result in NPA range errors. Patch adds a
> > routine that reads back the HW context and verifies whether the range
> > fields have been updated.
> > 
> > Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> 
> 
> 1) Please fix check-git-log.sh
> 
> $ ./devtools/check-git-log.sh
> Missing 'Fixes' tag:
>         mempool/octeontx2: fix npa pool range errors
> 
> 2) Please mention in the git commit log that this issue happens when
> mempool objects are from different mempools

One more comment: the title is supposed to say which behaviour
it is fixing, not the root cause.
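
For instance (illustrative wording only, not a suggestion made in this
thread), a behaviour-oriented title would read something like
"mempool/octeontx2: fix mempool creation failure" rather than naming
the npa range-error root cause.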
  

Patch

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index e1764b0..a60a77a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -600,6 +600,40 @@ npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
 }
 
 static int
+npa_lf_aura_range_update_check(uint64_t aura_handle)
+{
+	uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+	struct npa_aura_lim *lim = lf->aura_lim;
+	struct npa_aq_enq_req *req;
+	struct npa_aq_enq_rsp *rsp;
+	struct npa_pool_s *pool;
+	int rc;
+
+	req  = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
+
+	req->aura_id = aura_id;
+	req->ctype = NPA_AQ_CTYPE_POOL;
+	req->op = NPA_AQ_INSTOP_READ;
+
+	rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
+	if (rc) {
+		otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
+		return rc;
+	}
+
+	pool = &rsp->pool;
+
+	if (lim[aura_id].ptr_start != pool->ptr_start ||
+		lim[aura_id].ptr_end != pool->ptr_end) {
+		otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
+static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
 	uint32_t block_size, block_count;
@@ -724,6 +758,9 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
 
 	npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
 
+	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }
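
As written, the new check is a single read-back rather than a poll loop:
the pool context is fetched over the NPA mailbox, and if it does not yet
reflect the ptr_start/ptr_end recorded in the aura limits,
npa_lf_aura_range_update_check() returns -ERANGE and otx2_npa_populate()
bails out with -EBUSY before any buffers are enqueued, instead of letting
enqueue or dequeue operations hit the stale range.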