[dpdk-dev] mlx5 vxlan match filter vni endianness

Nélio Laranjeiro nelio.laranjeiro at 6wind.com
Thu Apr 6 16:43:23 CEST 2017


On Wed, Apr 05, 2017 at 08:23:35PM +0000, Legacy, Allain wrote:
> Hi,
> None of the comments in the rte_flow.h file (or the programmer's guide)
> specify what endianness should be applied to spec/mask fields.  Based
> on the testing I have done so far using a CX4 device (mlx5 driver)
> fields like VLAN ID and UDP ports are expected in network byte order.
> There seems to be a discrepancy with how VXLAN VNI values are
> expected; at least for this one driver.  Matching only works on VNI if
> it is specified in host byte order.
> 
> /**
> * RTE_FLOW_ITEM_TYPE_VXLAN.
> *
> * Matches a VXLAN header (RFC 7348).
> */
> struct rte_flow_item_vxlan {
>         uint8_t flags; /**< Normally 0x08 (I flag). */
>         uint8_t rsvd0[3]; /**< Reserved, normally 0x000000. */
>         uint8_t vni[3]; /**< VXLAN identifier. */
>         uint8_t rsvd1; /**< Reserved, normally 0x00. */
> };
> 
> 
> I have not done any testing on an i40e device yet, but looking at the
> i40e_flow.c code it looks like that driver expects to receive the VNI
> in network byte order:
> 
>         if (vxlan_spec && vxlan_mask && !is_vni_masked) {
>                 /* If there's vxlan */
>                 rte_memcpy(((uint8_t *)&tenant_id_be + 1),
>                            vxlan_spec->vni, 3);
>                 filter->tenant_id = rte_be_to_cpu_32(tenant_id_be);
>                 if (!o_eth_spec && !o_eth_mask &&
> 
> 
> Can you confirm whether the mlx5_flow.c behavior is a bug or whether
> my understanding is incorrect?

Hi Allain,

rte_flow always provides the fields in network order; some mistakes were
already reported, and that was the main reason I proposed documenting
such types [1].

Your point is correct: the difference between the two PMDs comes from the
fact that the NICs do not have the same endianness.  Intel NICs are
little endian whereas Mellanox NICs are big endian, which is why there is
no byte swap for the VNI in the Mellanox PMD.
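
For reference, this is roughly how an application is expected to fill
the vni[] field, in network byte order (a minimal sketch; the helper
name is mine, not part of rte_flow):

    #include <stdint.h>
    #include <rte_flow.h>

    /* Store a host-order 24-bit VNI into vni[] in network byte order,
     * which is what rte_flow expects regardless of the PMD behind it. */
    static void
    vxlan_item_set_vni(struct rte_flow_item_vxlan *vxlan, uint32_t vni)
    {
            vxlan->vni[0] = (vni >> 16) & 0xff; /* most significant byte first */
            vxlan->vni[1] = (vni >> 8) & 0xff;
            vxlan->vni[2] = vni & 0xff;
    }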

To be sure, I have just tested a flow rule matching a VXLAN VNI value and
redirecting it to another queue; it works perfectly for me on mlx5.
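
The rule was along these lines (a sketch only; the exact outer headers,
VNI value and queue index are placeholders, and error handling is
omitted):

    struct rte_flow_attr attr = { .ingress = 1 };
    /* VNI 0x001234 given in network byte order, fully masked. */
    struct rte_flow_item_vxlan vxlan_spec = { .vni = { 0x00, 0x12, 0x34 } };
    struct rte_flow_item_vxlan vxlan_mask = { .vni = { 0xff, 0xff, 0xff } };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP },
            { .type = RTE_FLOW_ITEM_TYPE_VXLAN,
              .spec = &vxlan_spec, .mask = &vxlan_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;
    struct rte_flow *flow = rte_flow_create(0, &attr, pattern, actions, &error);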

Are you facing some kind of issue?

Thanks,

[1] http://dpdk.org/ml/archives/dev/2016-November/050060.html

-- 
Nélio Laranjeiro
6WIND

