patch 'app/testpmd: perform SW IP checksum for GRO/GSO packets' has been queued to stable release 20.11.6

Xueming Li xuemingl at nvidia.com
Tue Jun 21 10:02:14 CEST 2022


Hi,

FYI, your patch has been queued to stable release 20.11.6

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 06/23/22. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if any rebasing was
needed to apply it to the stable branch. If there were code changes for
rebasing (i.e. not only metadata diffs), please double-check that the
rebase was done correctly.

Queued patches are on a temporary branch at:
https://github.com/steevenlee/dpdk

This queued commit can be viewed at:
https://github.com/steevenlee/dpdk/commit/9463f695d7c8f3f7d54fb575ecf143bcab3e6a7d

Thanks.

Xueming Li <xuemingl at nvidia.com>

---
From 9463f695d7c8f3f7d54fb575ecf143bcab3e6a7d Mon Sep 17 00:00:00 2001
From: Wenwu Ma <wenwux.ma at intel.com>
Date: Thu, 12 May 2022 01:07:56 +0000
Subject: [PATCH] app/testpmd: perform SW IP checksum for GRO/GSO packets
Cc: Xueming Li <xuemingl at nvidia.com>

[ upstream commit 1945c64674b2b9ad55af0ef31f8a02ae0b747400 ]

The GRO/GSO library doesn't re-calculate checksums for
merged/fragmented packets. If users want the packets to
have correct IP checksums, they should enable HW IP
checksum calculation on the port to which the packets
are transmitted. But if that port doesn't support the
HW IP checksum offload, testpmd has to calculate the IP
checksum in SW before transmitting.
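
As an illustration (a sketch added for this note, not part of the
upstream commit message), a testpmd session exercising this path
could look as follows; the commands come from the testpmd user
guide and assume GRO support is compiled in:

  testpmd> port stop 0
  testpmd> csum set ip sw 0      (TX port without HW IP checksum)
  testpmd> set port 0 gro on     (merge incoming TCP/IPv4 packets)
  testpmd> port start 0
  testpmd> set fwd csum
  testpmd> start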

Fixes: b7091f1dcfbc ("app/testpmd: enable the heavyweight mode TCP/IPv4 GRO")
Fixes: 52f38a2055ed ("app/testpmd: enable TCP/IPv4 VxLAN and GRE GSO")

Signed-off-by: Wenwu Ma <wenwux.ma at intel.com>
Reviewed-by: Jiayu Hu <jiayu.hu at intel.com>
Tested-by: Wei Ling <weix.ling at intel.com>
Acked-by: Yuying Zhang <yuying.zhang at intel.com>
---
 app/test-pmd/csumonly.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 243ef3e47a..282e87092f 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -760,6 +760,28 @@ pkt_copy_split(const struct rte_mbuf *pkt)
 	return md[0];
 }
 
+#if defined(RTE_LIB_GRO) || defined(RTE_LIB_GSO)
+/*
+ * Re-calculate IP checksum for merged/fragmented packets.
+ */
+static void
+pkts_ip_csum_recalc(struct rte_mbuf **pkts_burst, const uint16_t nb_pkts, uint64_t tx_offloads)
+{
+	int i;
+	struct rte_ipv4_hdr *ipv4_hdr;
+	for (i = 0; i < nb_pkts; i++) {
+		if ((pkts_burst[i]->ol_flags & PKT_TX_IPV4) &&
+			(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+			ipv4_hdr = rte_pktmbuf_mtod_offset(pkts_burst[i],
+						struct rte_ipv4_hdr *,
+						pkts_burst[i]->l2_len);
+			ipv4_hdr->hdr_checksum = 0;
+			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
+		}
+	}
+}
+#endif
+
 /*
  * Receive a burst of packets, and for each packet:
  *  - parse packet, and try to recognize a supported packet type (1)
@@ -1072,6 +1094,8 @@ tunnel_update:
 				fs->gro_times = 0;
 			}
 		}
+
+		pkts_ip_csum_recalc(pkts_burst, nb_rx, tx_offloads);
 	}
 
 	if (gso_ports[fs->tx_port].enable == 0)
@@ -1101,6 +1125,8 @@ tunnel_update:
 
 		tx_pkts_burst = gso_segments;
 		nb_rx = nb_segments;
+
+		pkts_ip_csum_recalc(tx_pkts_burst, nb_rx, tx_offloads);
 	}
 
 	nb_prep = rte_eth_tx_prepare(fs->tx_port, fs->tx_queue,
-- 
2.35.1
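
For reviewers who want the new helper with inline commentary, below is
a standalone, annotated sketch of what pkts_ip_csum_recalc() does. It
uses the 20.11-era flag names (PKT_TX_IPV4, DEV_TX_OFFLOAD_IPV4_CKSUM);
the function name sw_ip_csum is illustrative only, not the queued code:

  #include <rte_ethdev.h>   /* DEV_TX_OFFLOAD_IPV4_CKSUM */
  #include <rte_ip.h>       /* struct rte_ipv4_hdr, rte_ipv4_cksum() */
  #include <rte_mbuf.h>     /* struct rte_mbuf, PKT_TX_IPV4 */

  /* Recompute the IPv4 header checksum in SW for every TX packet whose
   * port is not configured to do it in HW. */
  static void
  sw_ip_csum(struct rte_mbuf **pkts, uint16_t nb_pkts, uint64_t tx_offloads)
  {
          struct rte_ipv4_hdr *ipv4_hdr;
          uint16_t i;

          for (i = 0; i < nb_pkts; i++) {
                  /* Skip non-IPv4 packets and ports doing the job in HW. */
                  if ((pkts[i]->ol_flags & PKT_TX_IPV4) == 0 ||
                      (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) != 0)
                          continue;
                  /* The IPv4 header starts right after the L2 header. */
                  ipv4_hdr = rte_pktmbuf_mtod_offset(pkts[i],
                                  struct rte_ipv4_hdr *, pkts[i]->l2_len);
                  /* The checksum field must be zero while summing. */
                  ipv4_hdr->hdr_checksum = 0;
                  ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
          }
  }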

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2022-06-21 15:37:52.379602352 +0800
+++ 0068-app-testpmd-perform-SW-IP-checksum-for-GRO-GSO-packe.patch	2022-06-21 15:37:49.097784798 +0800
@@ -1 +1 @@
-From 1945c64674b2b9ad55af0ef31f8a02ae0b747400 Mon Sep 17 00:00:00 2001
+From 9463f695d7c8f3f7d54fb575ecf143bcab3e6a7d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl at nvidia.com>
+
+[ upstream commit 1945c64674b2b9ad55af0ef31f8a02ae0b747400 ]
@@ -15 +17,0 @@
-Cc: stable at dpdk.org
@@ -26 +28 @@
-index cdb1920763..05763a71e8 100644
+index 243ef3e47a..282e87092f 100644
@@ -29 +31 @@
-@@ -778,6 +778,28 @@ pkt_copy_split(const struct rte_mbuf *pkt)
+@@ -760,6 +760,28 @@ pkt_copy_split(const struct rte_mbuf *pkt)
@@ -43,2 +45,2 @@
-+		if ((pkts_burst[i]->ol_flags & RTE_MBUF_F_TX_IPV4) &&
-+			(tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
++		if ((pkts_burst[i]->ol_flags & PKT_TX_IPV4) &&
++			(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
@@ -58 +60 @@
-@@ -1102,6 +1124,8 @@ tunnel_update:
+@@ -1072,6 +1094,8 @@ tunnel_update:
@@ -65 +66,0 @@
- #endif
@@ -67 +68,2 @@
-@@ -1135,6 +1159,8 @@ tunnel_update:
+ 	if (gso_ports[fs->tx_port].enable == 0)
+@@ -1101,6 +1125,8 @@ tunnel_update:
@@ -73,3 +75,3 @@
- 	} else
- #endif
- 		tx_pkts_burst = pkts_burst;
+ 	}
+ 
+ 	nb_prep = rte_eth_tx_prepare(fs->tx_port, fs->tx_queue,
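
In short, the only code differences in the rebase are the renames of
RTE_MBUF_F_TX_IPV4 and RTE_ETH_TX_OFFLOAD_IPV4_CKSUM back to their
pre-21.11 names PKT_TX_IPV4 and DEV_TX_OFFLOAD_IPV4_CKSUM; the rest is
hunk offsets and context lines (the 20.11 branch appears to lack some
GSO #ifdef context present upstream), so the backport should be
functionally identical to the upstream commit.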

