[dpdk-dev] net/thunderx: fix deadlock in Rx path

Message ID 20170501184155.27305-1-jerin.jacob@caviumnetworks.com (mailing list archive)
State Accepted, archived
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Jerin Jacob May 1, 2017, 6:41 p.m. UTC
  RBDR buffers are refilled only when SW consumes buffers from the CQ.
This creates a deadlock when CQ entries are exhausted due to a lack of
RBDR buffers: with no completions to consume, the refill path is never
reached. The fix is to refill the RBDR whenever rx_free_thresh is met,
irrespective of the number of CQ buffers consumed.

Fixes: e2d7fc9f0a24 ("net/thunderx: add single and multi-segment Rx")
Cc: stable@dpdk.org

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 drivers/net/thunderx/nicvf_rxtx.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)
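
The control-flow change is easier to see outside the diff. The sketch below is a simplified version of the Rx burst loop, not the driver code itself: process_cq_entries() is a hypothetical stand-in for the CQE-parsing loop, while recv_buffers, rx_free_thresh, nicvf_fill_rbdr() and NICVF_RX_ASSERT() are the real names from the patch.

/* Simplified Rx burst loop. process_cq_entries() is a hypothetical
 * placeholder for the CQE-parsing loop in nicvf_recv_pkts(). */
static uint16_t
recv_pkts_sketch(struct nicvf_rxq *rxq, uint16_t nb_pkts)
{
	/* Returns 0 when the CQ is empty -- which is exactly the case
	 * when the RBDR has run out of buffers to receive into. */
	uint16_t to_process = process_cq_entries(rxq, nb_pkts);

	if (to_process) {
		/* advance rxq->head, ring the CQ doorbell, ... */
		rxq->recv_buffers += to_process;
		/* Before the fix, the refill below lived inside this
		 * branch, so an empty CQ meant no refill: deadlock. */
	}

	/* After the fix the refill check runs on every call, so buffers
	 * consumed by earlier bursts are returned to the RBDR even when
	 * this burst found no completions. */
	if (rxq->recv_buffers > rxq->rx_free_thresh) {
		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
		NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
	}

	return to_process;
}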
  

Comments

Thomas Monjalon May 1, 2017, 8:31 p.m. UTC | #1
01/05/2017 20:41, Jerin Jacob:
> RBDR buffers are refilled only when SW consumes buffers from the CQ.
> This creates a deadlock when CQ entries are exhausted due to a lack of
> RBDR buffers: with no completions to consume, the refill path is never
> reached. The fix is to refill the RBDR whenever rx_free_thresh is met,
> irrespective of the number of CQ buffers consumed.
> 
> Fixes: e2d7fc9f0a24 ("net/thunderx: add single and multi-segment Rx")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

Applied, thanks
  

Patch

diff --git a/drivers/net/thunderx/nicvf_rxtx.c b/drivers/net/thunderx/nicvf_rxtx.c
index 003ab0693..6cae8341b 100644
--- a/drivers/net/thunderx/nicvf_rxtx.c
+++ b/drivers/net/thunderx/nicvf_rxtx.c
@@ -464,11 +464,10 @@ nicvf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxq->head = cqe_head;
 		nicvf_addr_write(rxq->cq_door, to_process);
 		rxq->recv_buffers += to_process;
-		if (rxq->recv_buffers > rxq->rx_free_thresh) {
-			rxq->recv_buffers -= nicvf_fill_rbdr(rxq,
-						rxq->rx_free_thresh);
-			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
-		}
+	}
+	if (rxq->recv_buffers > rxq->rx_free_thresh) {
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+		NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
 	}
 
 	return to_process;
@@ -555,11 +554,10 @@ nicvf_recv_pkts_multiseg(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxq->head = cqe_head;
 		nicvf_addr_write(rxq->cq_door, to_process);
 		rxq->recv_buffers += buffers_consumed;
-		if (rxq->recv_buffers > rxq->rx_free_thresh) {
-			rxq->recv_buffers -=
-				nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
-			NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
-		}
+	}
+	if (rxq->recv_buffers > rxq->rx_free_thresh) {
+		rxq->recv_buffers -= nicvf_fill_rbdr(rxq, rxq->rx_free_thresh);
+		NICVF_RX_ASSERT(rxq->recv_buffers >= 0);
 	}
 
 	return to_process;
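
For context, rx_free_thresh is the queue's refill batch size and comes from the rte_eth_rxconf supplied at queue setup. A minimal sketch of how an application sets it, assuming port 0 and an already-created mempool mb_pool; the values 32 and 512 are arbitrary examples, and whatever default the driver applies when no rxconf is given is not shown here:

	struct rte_eth_rxconf rxconf = {
		.rx_free_thresh = 32,	/* refill the RBDR in batches of 32 */
	};

	/* 512 descriptors on queue 0 of port 0; mb_pool supplies the
	 * mbufs that nicvf_fill_rbdr() posts to the RBDR. */
	int ret = rte_eth_rx_queue_setup(0, 0, 512,
					 rte_eth_dev_socket_id(0),
					 &rxconf, mb_pool);

With the fix applied, an rte_eth_rx_burst() call that returns 0 packets still reaches the refill check, so the RBDR recovers once recv_buffers exceeds this threshold instead of staying stuck.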