[PATCH 1/1] net/mlx5: fix inline data length for multisegment packets

Raslan Darawsheh rasland at nvidia.com
Sun Nov 12 15:41:59 CET 2023


Hi,

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo at nvidia.com>
> Sent: Friday, November 10, 2023 11:50 AM
> To: dev at dpdk.org
> Cc: Raslan Darawsheh <rasland at nvidia.com>; Matan Azrad
> <matan at nvidia.com>; Suanming Mou <suanmingm at nvidia.com>;
> stable at dpdk.org
> Subject: [PATCH 1/1] net/mlx5: fix inline data length for multisegment packets
> 
> If the packet data length exceeds the configured limit for inlining
> data into the queue descriptor, the driver checks whether the hardware
> requires a minimal amount of inline data, or whether VLAN insertion
> offload is requested but not supported by the hardware (in which case
> the VLAN header must be inserted in software as part of the inline
> data). The driver then scans the mbuf chain to find the minimal number
> of segments needed to provide the minimal inline data.
> 
> The inline data length of the first segment was calculated without
> accounting for the VLAN header to be inserted, which could lead to a
> segmentation fault while scanning the mbuf chain, for example for
> a packet like:
> 
>   packet:
>     mbuf0 pkt_len = 288, data_len = 156
>     mbuf1 pkt_len = 132, data_len = 132
> 
>   txq->inlen_send = 290
> 
> The driver was trying to gather inlen_send bytes of inline data plus
> the missing VLAN header length and ran off the end of the mbuf chain
> (the packet simply did not contain enough data to satisfy the
> criteria).
> 
> Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")
> Fixes: ec837ad0fc7c ("net/mlx5: fix multi-segment inline for the first segments")
> Cc: stable at dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo at nvidia.com>
> Acked-by: Suanming Mou <suanmingm at nvidia.com>

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

