[PATCH] net/mlx5: zero encap UDP csum for IPv4 too
Suanming Mou
suanmingm at nvidia.com
Mon Nov 13 09:01:16 CET 2023
Hi,
> -----Original Message-----
> From: Bing Zhao <bingz at nvidia.com>
> Sent: Monday, November 13, 2023 3:30 PM
> To: Matan Azrad <matan at nvidia.com>; Slava Ovsiienko
> <viacheslavo at nvidia.com>; Raslan Darawsheh <rasland at nvidia.com>; Suanming
> Mou <suanmingm at nvidia.com>; Ori Kam <orika at nvidia.com>
> Cc: dev at dpdk.org; Eli Britstein <elibr at nvidia.com>; stable at dpdk.org
> Subject: [PATCH] net/mlx5: zero encap UDP csum for IPv4 too
>
> From: Eli Britstein <elibr at nvidia.com>
>
> A zero UDP csum indicates that the receiver should not validate it.
> The HW may not calculate the UDP csum after encap.
>
> The cited commit made sure the UDP csum is zero for UDP over IPv6, but
> mistakenly did not handle UDP over IPv4. Fix it.
>
> Fixes: bf1d7d9a033a ("net/mlx5: zero out UDP checksum in encapsulation")
> Cc: stable at dpdk.org
>
> Signed-off-by: Eli Britstein <elibr at nvidia.com>
Acked-by: Suanming Mou <suanmingm at nvidia.com>
> ---
> drivers/net/mlx5/mlx5_flow_dv.c | 26 +++++++++++++++-----------
> 1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 9753af2cb1..115d730317 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -4713,6 +4713,7 @@ flow_dv_zero_encap_udp_csum(void *data, struct rte_flow_error *error) {
> struct rte_ether_hdr *eth = NULL;
> struct rte_vlan_hdr *vlan = NULL;
> + struct rte_ipv4_hdr *ipv4 = NULL;
> struct rte_ipv6_hdr *ipv6 = NULL;
> struct rte_udp_hdr *udp = NULL;
> char *next_hdr;
> @@ -4729,24 +4730,27 @@ flow_dv_zero_encap_udp_csum(void *data, struct rte_flow_error *error)
> next_hdr += sizeof(struct rte_vlan_hdr);
> }
>
> - /* HW calculates IPv4 csum. no need to proceed */
> - if (proto == RTE_ETHER_TYPE_IPV4)
> - return 0;
> -
> /* non IPv4/IPv6 header. not supported */
> - if (proto != RTE_ETHER_TYPE_IPV6) {
> + if (proto != RTE_ETHER_TYPE_IPV4 && proto != RTE_ETHER_TYPE_IPV6) {
> return rte_flow_error_set(error, ENOTSUP,
> RTE_FLOW_ERROR_TYPE_ACTION,
> NULL, "Cannot offload non IPv4/IPv6");
> }
>
> - ipv6 = (struct rte_ipv6_hdr *)next_hdr;
> -
> - /* ignore non UDP */
> - if (ipv6->proto != IPPROTO_UDP)
> - return 0;
> + if (proto == RTE_ETHER_TYPE_IPV4) {
> + ipv4 = (struct rte_ipv4_hdr *)next_hdr;
> + /* ignore non UDP */
> + if (ipv4->next_proto_id != IPPROTO_UDP)
> + return 0;
> + udp = (struct rte_udp_hdr *)(ipv4 + 1);
> + } else {
> + ipv6 = (struct rte_ipv6_hdr *)next_hdr;
> + /* ignore non UDP */
> + if (ipv6->proto != IPPROTO_UDP)
> + return 0;
> + udp = (struct rte_udp_hdr *)(ipv6 + 1);
> + }
>
> - udp = (struct rte_udp_hdr *)(ipv6 + 1);
> udp->dgram_cksum = 0;
>
> return 0;
> --
> 2.34.1
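For readers outside the mlx5 driver, a minimal standalone sketch of the fixed logic follows. This is not DPDK code: `eth_hdr`, `ipv4_hdr`, `udp_hdr`, and `zero_encap_udp_csum()` are simplified local stand-ins for the `rte_*` types and the patched function, and the VLAN and IPv6 branches are elided for brevity. It shows the core idea of the fix: for IPv4 as well as IPv6, walk to the inner UDP header and zero its checksum, since a zero UDP checksum means "no checksum, do not validate" (RFC 768).

```c
#include <stdint.h>

#define ETHER_TYPE_IPV4 0x0800
#define ETHER_TYPE_IPV6 0x86DD
#define PROTO_UDP       17

/* Simplified stand-ins for rte_ether_hdr / rte_ipv4_hdr / rte_udp_hdr. */
struct eth_hdr  { uint8_t dst[6]; uint8_t src[6];
                  uint16_t ether_type; } __attribute__((packed));
struct ipv4_hdr { uint8_t ver_ihl; uint8_t tos; uint16_t len; uint16_t id;
                  uint16_t frag; uint8_t ttl; uint8_t proto; uint16_t csum;
                  uint32_t src; uint32_t dst; } __attribute__((packed));
struct udp_hdr  { uint16_t sport; uint16_t dport; uint16_t len;
                  uint16_t csum; } __attribute__((packed));

/* Returns 0 on success (non-UDP payloads are ignored, as in the patch),
 * -1 for non-IPv4/IPv6 ether types (the ENOTSUP path in the patch). */
static int zero_encap_udp_csum(uint8_t *data)
{
    /* Read ether_type as big-endian, independent of host byte order. */
    uint16_t proto = (uint16_t)((data[12] << 8) | data[13]);
    uint8_t *next = data + sizeof(struct eth_hdr);

    if (proto != ETHER_TYPE_IPV4 && proto != ETHER_TYPE_IPV6)
        return -1;
    if (proto == ETHER_TYPE_IPV4) {
        struct ipv4_hdr *ip4 = (struct ipv4_hdr *)next;
        if (ip4->proto != PROTO_UDP)
            return 0;              /* ignore non-UDP */
        struct udp_hdr *udp = (struct udp_hdr *)(ip4 + 1);
        udp->csum = 0;             /* 0 == receiver skips validation */
    }
    /* IPv6 branch (ipv6->proto, mandatory-csum caveats) omitted here. */
    return 0;
}
```

Note the asymmetry the patch papers over: for IPv4 a zero UDP checksum is explicitly legal, while plain IPv6 normally requires a valid one; zeroing it for encapsulated (tunneled) traffic relies on the relaxation for tunnel protocols (RFC 6935).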