[PATCH 5/7] net/mlx5: allow hairpin Rx queue in locked memory

Kenneth Klette Jonassen kenneth at bridgetech.tv
Fri Nov 25 15:06:24 CET 2022


This series adds support for backing the hairpin RQ with device-managed
MEMIC buffers instead of NIM. Was extending the UMEM interface to support
MEMIC buffers ever considered as an alternative?

I'm thinking that could simplify the hairpin-specific firmware bits
being added in this series, e.g. no new HAIRPIN_DATA_BUFFER_LOCK TLV,
and the MEMIC-backed UMEM can be passed to RQ using existing PRM bits.

I'm planning to file a feature request adding MEMIC support to UMEM,
so I'd be interested in knowing if that's somehow not possible. My
current use case is allocating 64 bytes of MEMIC for a collapsed CQE
for something similar to the mlx5 packet send scheduling in DPDK.

Best regards,
Kenneth Jonassen

> On 19 Sep 2022, at 18:37, Dariusz Sosnowski <dsosnowski at nvidia.com> wrote:
> 
> This patch adds a capability to place a hairpin Rx queue in locked device
> memory. This capability is equivalent to storing the hairpin RQ's data
> buffers in locked internal device memory.
> 
> Hairpin Rx queue creation is extended with a request that the RQ be
> allocated in locked internal device memory. If allocation fails and the
> force_memory hairpin configuration is set, then hairpin queue creation
> (and, as a result, device start) fails. If force_memory is unset, then
> the PMD will fall back to allocating memory for the hairpin RQ in
> unlocked internal device memory.
> 
> To allow such an allocation, the user must set the HAIRPIN_DATA_BUFFER_LOCK
> flag in FW using the mlxconfig tool.
> 
> Signed-off-by: Dariusz Sosnowski <dsosnowski at nvidia.com>
> ---
> doc/guides/platform/mlx5.rst   |  5 ++++
> drivers/net/mlx5/mlx5_devx.c   | 51 ++++++++++++++++++++++++++++------
> drivers/net/mlx5/mlx5_ethdev.c |  2 ++
> 3 files changed, 49 insertions(+), 9 deletions(-)
> 
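For anyone trying this, the mlxconfig step mentioned above typically looks
like the following; the MST device path is only an example and varies per
system, and the setting needs a firmware reset (or reboot) to take effect:

```shell
# Enable locked hairpin data buffers in firmware (example device path).
mlxconfig -d /dev/mst/mt4125_pciconf0 set HAIRPIN_DATA_BUFFER_LOCK=1

# Apply the new configuration with a firmware reset (or reboot instead).
mlxfwreset -d /dev/mst/mt4125_pciconf0 reset
```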


