patch 'net/txgbe: add proper memory barriers in Rx' has been queued to stable release 22.11.4

Xueming Li xuemingl at nvidia.com
Mon Dec 11 11:10:56 CET 2023


Hi,

FYI, your patch has been queued to stable release 22.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/13/23, so please
shout if you have any objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate whether any rebasing
was needed to apply it to the stable branch. If there were code changes
for rebasing (i.e., not only metadata diffs), please double-check that
the rebase was done correctly.

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=33f8a0ce2cb240f08f1c160ad1712999c7a3a298
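
For reviewers who want to reproduce such a comparison locally (assuming
both the upstream commit and the staging commit are fetched), one option
is git range-diff, e.g.:

    git range-diff 5bf954b7d91a~..5bf954b7d91a \
                   33f8a0ce2cb2~..33f8a0ce2cb2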

Thanks.

Xueming Li <xuemingl at nvidia.com>

---
From 33f8a0ce2cb240f08f1c160ad1712999c7a3a298 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu at trustnetic.com>
Date: Wed, 1 Nov 2023 11:32:40 +0800
Subject: [PATCH] net/txgbe: add proper memory barriers in Rx
Cc: Xueming Li <xuemingl at nvidia.com>

[ upstream commit 5bf954b7d91ad20ee87befbad9fdb53f03dd488b ]

Refer to commit 85e46c532bc7 ("net/ixgbe: add proper memory barriers in
Rx"). This fixes the same issue in txgbe as was fixed there for ixgbe.

A segmentation fault has been observed while running the
txgbe_recv_pkts_lro() function to receive packets on the Loongson 3A5000
processor. It is caused by out-of-order execution on the CPU, which can
load the other descriptor words before the status word carrying the DD
(descriptor done) bit has been checked. Add a proper memory barrier to
ensure the reads are correctly ordered.

Do the same in the txgbe_recv_pkts() function to keep the rxd data
valid, even though no segmentation fault was observed in that function.
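
As a simplified, stand-alone sketch of the pattern (the descriptor
layout and names below are illustrative placeholders, not the actual
txgbe definitions):

    #include <stdint.h>

    struct rx_desc {
        uint32_t status;   /* written last by the NIC; carries the DD bit */
        uint32_t length;   /* stands in for the other descriptor words    */
    };

    #define DD_BIT 0x1u

    static inline int
    rx_desc_ready(volatile struct rx_desc *rxdp, struct rx_desc *out)
    {
        /* Load the status word and test the DD (descriptor done) bit. */
        if (!(rxdp->status & DD_BIT))
            return 0;

        /*
         * Acquire fence: "volatile" only constrains the compiler, so a
         * weakly ordered CPU may otherwise load the remaining descriptor
         * words before the status check and observe stale data even
         * though DD reads as set.
         */
        __atomic_thread_fence(__ATOMIC_ACQUIRE);

        /* Only now is it safe to read the rest of the descriptor. */
        out->status = rxdp->status;
        out->length = rxdp->length;
        return 1;
    }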

Fixes: 0e484278c85f ("net/txgbe: support Rx")

Signed-off-by: Jiawen Wu <jiawenwu at trustnetic.com>
---
 drivers/net/txgbe/txgbe_rxtx.c | 47 +++++++++++++++-------------------
 1 file changed, 21 insertions(+), 26 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 834ada886a..24fc34d3c4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1476,11 +1476,22 @@ txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * of accesses cannot be reordered by the compiler. If they were
 		 * not volatile, they could be reordered which could lead to
 		 * using invalid descriptor fields when read from rxd.
+		 *
+		 * Meanwhile, to prevent the CPU from executing out of order, we
+		 * need to use a proper memory barrier to ensure the memory
+		 * ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rxdp->qw1.lo.status;
 		if (!(staterr & rte_cpu_to_le_32(TXGBE_RXD_STAT_DD)))
 			break;
+
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		/*
@@ -1726,32 +1737,10 @@ txgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 
 next_desc:
 		/*
-		 * The code in this whole file uses the volatile pointer to
-		 * ensure the read ordering of the status and the rest of the
-		 * descriptor fields (on the compiler level only!!!). This is so
-		 * UGLY - why not to just use the compiler barrier instead? DPDK
-		 * even has the rte_compiler_barrier() for that.
-		 *
-		 * But most importantly this is just wrong because this doesn't
-		 * ensure memory ordering in a general case at all. For
-		 * instance, DPDK is supposed to work on Power CPUs where
-		 * compiler barrier may just not be enough!
-		 *
-		 * I tried to write only this function properly to have a
-		 * starting point (as a part of an LRO/RSC series) but the
-		 * compiler cursed at me when I tried to cast away the
-		 * "volatile" from rx_ring (yes, it's volatile too!!!). So, I'm
-		 * keeping it the way it is for now.
-		 *
-		 * The code in this file is broken in so many other places and
-		 * will just not work on a big endian CPU anyway therefore the
-		 * lines below will have to be revisited together with the rest
-		 * of the txgbe PMD.
-		 *
-		 * TODO:
-		 *    - Get rid of "volatile" and let the compiler do its job.
-		 *    - Use the proper memory barrier (rte_rmb()) to ensure the
-		 *      memory ordering below.
+		 * "Volatile" only prevents caching of the variable marked
+		 * volatile. Most important, "volatile" cannot prevent the CPU
+		 * from executing out of order. So, it is necessary to use a
+		 * proper memory barrier to ensure the memory ordering below.
 		 */
 		rxdp = &rx_ring[rx_id];
 		staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
@@ -1759,6 +1748,12 @@ next_desc:
 		if (!(staterr & TXGBE_RXD_STAT_DD))
 			break;
 
+		/*
+		 * Use acquire fence to ensure that status_error which includes
+		 * DD bit is loaded before loading of other descriptor words.
+		 */
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
 		rxd = *rxdp;
 
 		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2023-12-11 17:56:24.187521000 +0800
+++ 0031-net-txgbe-add-proper-memory-barriers-in-Rx.patch	2023-12-11 17:56:22.937652300 +0800
@@ -1 +1 @@
-From 5bf954b7d91ad20ee87befbad9fdb53f03dd488b Mon Sep 17 00:00:00 2001
+From 33f8a0ce2cb240f08f1c160ad1712999c7a3a298 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 5bf954b7d91ad20ee87befbad9fdb53f03dd488b ]
@@ -19 +21,0 @@
-Cc: stable at dpdk.org
@@ -23,2 +25,2 @@
- drivers/net/txgbe/txgbe_rxtx.c | 49 +++++++++++++++-------------------
- 1 file changed, 22 insertions(+), 27 deletions(-)
+ drivers/net/txgbe/txgbe_rxtx.c | 47 +++++++++++++++-------------------
+ 1 file changed, 21 insertions(+), 26 deletions(-)
@@ -27 +29 @@
-index 834ada886a..1cd4b25965 100644
+index 834ada886a..24fc34d3c4 100644
@@ -30,9 +31,0 @@
-@@ -1226,7 +1226,7 @@ txgbe_rx_scan_hw_ring(struct txgbe_rx_queue *rxq)
- 		for (j = 0; j < LOOK_AHEAD; j++)
- 			s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status);
- 
--		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
-+		rte_atomic_thread_fence(rte_memory_order_acquire);
- 
- 		/* Compute how many status bits were set */
- 		for (nb_dd = 0; nb_dd < LOOK_AHEAD &&
@@ -57 +50 @@
-+		rte_atomic_thread_fence(rte_memory_order_acquire);
++		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
@@ -107 +100 @@
-+		rte_atomic_thread_fence(rte_memory_order_acquire);
++		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);

