[v2] net/mlx5: fix single not inline packet elts storing

Message ID 20220817070425.30121-1-viacheslavo@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Series [v2] net/mlx5: fix single not inline packet elts storing

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/github-robot: build success github build: passed
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS

Commit Message

Slava Ovsiienko Aug. 17, 2022, 7:04 a.m. UTC
  The mlx5 PMD can inline packet data into the transmit descriptor (WQE)
and free the mbuf immediately, as the data is no longer needed. For
non-inline packets the mbuf pointer must be stored in the elts array so
it can be freed later, on send completion. There was an optimization
that stored these pointers in batches, and it missed storing the mbuf
for a single packet when non-inline was explicitly requested by the
per-packet flag.

Fixes: cacb44a09962 ("net/mlx5: add no-inline Tx flag")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v2: "Fixes tag" added

 drivers/net/mlx5/mlx5_tx.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
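
For context, the "flag" mentioned in the commit message is a per-packet request not to inline the data, delivered through a dynamic mbuf flag. Below is a minimal sketch of how an application might set such a flag before transmitting; the lookup name string and the helper function names are hypothetical placeholders, not the identifiers registered by the PMD (the real names can be queried with rte_pmd_mlx5_get_dyn_flag_names()).

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_ethdev.h>

/* Hypothetical name: the actual dynamic flag name is defined by the PMD. */
#define NOINLINE_DYNFLAG_NAME "pmd_noinline_hint"

static uint64_t noinline_mask; /* resolved bit mask for the flag */

static int
noinline_flag_init(void)
{
	int bit = rte_mbuf_dynflag_lookup(NOINLINE_DYNFLAG_NAME, NULL);

	if (bit < 0)
		return bit; /* the flag was not registered by the PMD */
	noinline_mask = UINT64_C(1) << bit;
	return 0;
}

static uint16_t
tx_one_no_inline(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *m)
{
	/* Ask the PMD not to inline this packet into the WQE; the mbuf
	 * then has to stay in the elts array until send completion. */
	m->ol_flags |= noinline_mask;
	return rte_eth_tx_burst(port_id, queue_id, &m, 1);
}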
  

Comments

Raslan Darawsheh Aug. 17, 2022, 8:21 a.m. UTC | #1
Hi,

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, August 17, 2022 10:04 AM
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2] net/mlx5: fix single not inline packet elts storing
> 
> The mlx5 PMD can inline packet data into the transmit descriptor (WQE)
> and free the mbuf immediately, as the data is no longer needed. For
> non-inline packets the mbuf pointer must be stored in the elts array so
> it can be freed later, on send completion. There was an optimization
> that stored these pointers in batches, and it missed storing the mbuf
> for a single packet when non-inline was explicitly requested by the
> per-packet flag.
> 
> Fixes: cacb44a09962 ("net/mlx5: add no-inline Tx flag")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> v2: "Fixes tag" added
> 
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 20776919c2..c65031ed3b 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -3314,7 +3314,9 @@  mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			 * if no inlining is configured, this is done
 			 * by calling routine in a batch copy.
 			 */
-			MLX5_ASSERT(!MLX5_TXOFF_CONFIG(INLINE));
+			if (MLX5_TXOFF_CONFIG(INLINE))
+				txq->elts[txq->elts_head++ & txq->elts_m] =
+							loc->mbuf;
 			--loc->elts_free;
 #ifdef MLX5_PMD_SOFT_COUNTERS
 			/* Update sent data bytes counter. */
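
To illustrate what the restored store does, here is a simplified sketch of the elts ring pattern; the structure and function names below are hypothetical stand-ins, not the real mlx5_txq_data layout or the PMD's completion handling. Each non-inlined mbuf is parked at elts_head & elts_m (the ring size is a power of two) and freed only once the corresponding send completion arrives.

#include <stdint.h>
#include <rte_mbuf.h>

struct sketch_txq {
	uint16_t elts_head;      /* producer index, grows monotonically */
	uint16_t elts_tail;      /* consumer index, advanced on completion */
	uint16_t elts_m;         /* index mask, ring size is a power of two */
	struct rte_mbuf *elts[]; /* mbufs waiting for send completion */
};

static inline void
sketch_store_mbuf(struct sketch_txq *txq, struct rte_mbuf *mbuf)
{
	/* Same indexing as the fix: post-increment head, wrap with the mask. */
	txq->elts[txq->elts_head++ & txq->elts_m] = mbuf;
}

static inline void
sketch_complete(struct sketch_txq *txq, uint16_t done)
{
	/* Release every mbuf covered by the reported completion. */
	while (done--)
		rte_pktmbuf_free(txq->elts[txq->elts_tail++ & txq->elts_m]);
}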