[dpdk-stable] [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness

Ferruh Yigit ferruh.yigit at intel.com
Fri Nov 6 18:41:28 CET 2020


On 11/6/2020 2:10 PM, Bing Zhao wrote:
> Hi Ferruh,
> 
> Thanks for your review and comments, PSB.
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit at intel.com>
>> Sent: Friday, November 6, 2020 7:20 PM
>> To: Bing Zhao <bingz at nvidia.com>; Slava Ovsiienko
>> <viacheslavo at nvidia.com>; Matan Azrad <matan at nvidia.com>
>> Cc: dev at dpdk.org; Ori Kam <orika at nvidia.com>; Raslan Darawsheh
>> <rasland at nvidia.com>; stable at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common
>> header endianness
>>
>> External email: Use caution opening links or attachments
>>
>>
>> On 11/3/2020 5:41 AM, Bing Zhao wrote:
>>> The input header of an RTE flow item is in network byte order. On a
>>> little-endian host, the bit field order follows the byte order.
>>> When checking an eCPRI message type, the wrong field will be
>>> selected. Right now, since the whole u32 is being checked and the
>>> following implementation is the same for all types, there is no
>>> functional risk, but it is still an error to fix.
>>
>> Isn't 'ecpri_v' filled by the application (CPU), so why is there an
>> assumption that it is big endian?
>>
> 
> Yes, this is filled by the application SW. I checked the current code of other headers and the current implementation.
> 1. In the testpmd flow example, all the headers of the network stack, like IPv4, are translated to BE via "ARGS_ENTRY_HTON".
> 2. All fields have the "rte_be*_t" type; even though it is only a typedef, they should be considered to be in BE.
> 3. RTE flow just passes the flow item pointers to the PMD drivers, so in the PMD the headers should be treated as BE.
> So, to my understanding, this is not an assumption but a constraint.
> Correct me if my understanding is wrong.

I missed that in 'cmdline_flow.c' the big endian conversion is done via the 
'arg->hton' check, so yes, the PMD getting big endian values makes sense, 
thanks for the clarification.
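
To make that concrete, here is a minimal sketch (assuming the 
'struct rte_ecpri_common_hdr' layout from 'rte_ecpri.h', where the bit fields 
are declared in reverse order under RTE_LITTLE_ENDIAN) of why the 'type' bit 
field cannot be read directly from the big-endian item value on a 
little-endian host:

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ecpri.h>

/*
 * The flow item carries the 4-byte eCPRI common header in network
 * (big-endian) order.  On a little-endian host the bit fields of
 * struct rte_ecpri_common_hdr only line up with the wire bytes after
 * the dword is converted to CPU order.
 */
static uint8_t
ecpri_msg_type(const struct rte_ecpri_common_hdr *be_hdr)
{
	struct rte_ecpri_common_hdr cpu_hdr;

	/* Wrong on little-endian hosts: 'type' would read the byte that
	 * actually holds the high part of 'size':
	 *     return be_hdr->type;
	 */
	cpu_hdr.u32 = rte_be_to_cpu_32(be_hdr->u32);
	return cpu_hdr.type;
}

The same applies to the mask: only after the swap do 'revision', 'type' and 
'size' refer to the bytes the application actually filled.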

> 
>> And even if it is big endian, it should have been broken previously,
>> right? Since it was checking the wrong field as 'type', as you said,
>> why was there no functional risk?
> 
> In the PMD driver, the first u32 *value containing this type is already used for matching. And a check is used for matching the following "sub-headers".
> Indeed, the first u32 *mask is used to confirm whether the following sub-header needs to be matched.
> Since different types lead to different variants of the packet, the switch on the type is used.
> But all 3 types supported in the PMD now have almost the same result (part of the second u32, with the remaining bits all 0s).
> So even if the type read from the value is always "0" by mistake, the cases are the same and it still works by coincidence.

If 'type' is '0' I see it works, but what if the 'type' is something other 
than the values covered in the 'switch'? Then it may fail matching the 
'sub-headers', and I guess that can happen based on the value of the 'size' 
field.
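
For reference, the kind of change being discussed would look roughly like the 
sketch below in the PMD translation path (not the literal mlx5 patch; the 
message type macros are the ones defined in 'rte_ecpri.h', and only the three 
types the PMD supports are listed):

#include <stdbool.h>
#include <rte_byteorder.h>
#include <rte_ecpri.h>
#include <rte_flow.h>

/* Decide whether the second dword of the eCPRI header (PC_ID / RTC_ID /
 * Measurement ID) should also be matched, based on the message type in
 * the big-endian item value.
 */
static bool
ecpri_match_second_dword(const struct rte_flow_item_ecpri *ecpri_v)
{
	struct rte_ecpri_common_hdr common;

	/* The fix: byte-swap the first dword before reading the 'type'
	 * bit field, instead of reading it straight from the BE value.
	 */
	common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32);

	switch (common.type) {
	case RTE_ECPRI_MSG_TYPE_IQ_DATA:
	case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
	case RTE_ECPRI_MSG_TYPE_DLY_MSR:
		return true;	/* these variants carry an ID to match */
	default:
		return false;	/* only the common header is matched */
	}
}

Without the swap, 'type' effectively reads the high byte of 'size', which is 
the coincidence described above and the failure mode I am asking about.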

Anyway, since you are already fixing it, will you be OK if I drop the last 
two sentences from the commit log and proceed?

