[dpdk-stable] patch 'net/memif: relax load of ring tail for M2S ring' has been queued to stable release 19.11.6

luca.boccassi at gmail.com
Wed Oct 28 11:45:28 CET 2020


Hi,

FYI, your patch has been queued to stable release 19.11.6

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 10/30/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate whether any rebasing was
needed to apply it to the stable branch. If there were code changes for
rebasing (i.e. not only metadata diffs), please double-check that the rebase
was done correctly.

Thanks.

Luca Boccassi

---
From 863291558d1067b389c0c97827f46c8c385f94a5 Mon Sep 17 00:00:00 2001
From: Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>
Date: Mon, 28 Sep 2020 14:03:28 -0500
Subject: [PATCH] net/memif: relax load of ring tail for M2S ring

[ upstream commit 827660278032e94541aa8f9363aa16afe6ad0964 ]

For M2S rings, ring->tail is updated by the sender and eth_memif_tx
function is called in the context of sending thread. The loads in
the sender do not need to synchronize with its own stores.

Fixes: a2aafb9aa651 ("net/memif: optimize with one-way barrier")

Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>
Reviewed-by: Phil Yang <phil.yang at arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang at arm.com>
Reviewed-by: Jakub Grajciar <jgrajcia at cisco.com>
---
 drivers/net/memif/rte_eth_memif.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 1c41988069..3b377bc54c 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -566,7 +566,13 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		n_free = ring_size - slot +
 				__atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
 	} else {
-		slot = __atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
+		/* For M2S queues ring->tail is updated by the sender and
+		 * this function is called in the context of sending thread.
+		 * The loads in the sender do not need to synchronize with
+		 * its own stores. Hence, the following load can be a
+		 * relaxed load.
+		 */
+		slot = __atomic_load_n(&ring->tail, __ATOMIC_RELAXED);
 		n_free = __atomic_load_n(&ring->head, __ATOMIC_ACQUIRE) - slot;
 	}
 
-- 
2.20.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2020-10-28 10:35:16.972530616 +0000
+++ 0169-net-memif-relax-load-of-ring-tail-for-M2S-ring.patch	2020-10-28 10:35:11.776834028 +0000
@@ -1,14 +1,15 @@
-From 827660278032e94541aa8f9363aa16afe6ad0964 Mon Sep 17 00:00:00 2001
+From 863291558d1067b389c0c97827f46c8c385f94a5 Mon Sep 17 00:00:00 2001
 From: Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>
 Date: Mon, 28 Sep 2020 14:03:28 -0500
 Subject: [PATCH] net/memif: relax load of ring tail for M2S ring
 
+[ upstream commit 827660278032e94541aa8f9363aa16afe6ad0964 ]
+
 For M2S rings, ring->tail is updated by the sender and eth_memif_tx
 function is called in the context of sending thread. The loads in
 the sender do not need to synchronize with its own stores.
 
 Fixes: a2aafb9aa651 ("net/memif: optimize with one-way barrier")
-Cc: stable at dpdk.org
 
 Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli at arm.com>
 Reviewed-by: Phil Yang <phil.yang at arm.com>
@@ -19,10 +20,10 @@
  1 file changed, 7 insertions(+), 1 deletion(-)
 
 diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
-index d749b5b16c..b72e24932e 100644
+index 1c41988069..3b377bc54c 100644
 --- a/drivers/net/memif/rte_eth_memif.c
 +++ b/drivers/net/memif/rte_eth_memif.c
-@@ -585,7 +585,13 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
+@@ -566,7 +566,13 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
  		n_free = ring_size - slot +
  				__atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
  	} else {

