[dpdk-stable] [EXT] Re: [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use required mem barriers in Rx path

Rasesh Mody rmody at marvell.com
Sun Jan 26 23:55:27 CET 2020


Hi Jerin,

>From: Rasesh Mody <rmody at marvell.com>
>Sent: Tuesday, January 14, 2020 5:57 PM
>
>Hi Jerin,
>
>>From: Jerin Jacob <jerinjacobk at gmail.com>
>>Sent: Tuesday, January 14, 2020 5:49 AM
>>To: Rasesh Mody <rmody at marvell.com>; Gavin Hu <gavin.hu at arm.com>
>>Cc: dpdk-dev <dev at dpdk.org>; Jerin Jacob Kollanukkaran
>><jerinj at marvell.com>; Ferruh Yigit <ferruh.yigit at intel.com>;
>>GR-Everest- DPDK-Dev <GR-Everest-DPDK-Dev at marvell.com>; dpdk stable
>><stable at dpdk.org>
>>Subject: [EXT] Re: [dpdk-dev] [PATCH 1/3] net/bnx2x: fix to use
>>required mem barriers in Rx path
>>
>>+ Gavin
>>
>>On Tue, Jan 14, 2020 at 6:09 AM Rasesh Mody <rmody at marvell.com> wrote:
>>>
>>> When handling the Rx completion queue, the PMD does not use the
>>> required read/write barriers before reading completion queue element
>>> (CQE) indices and before updating/writing the hardware consumer and
>>> producer. This patch adds the appropriate read/write memory barriers
>>> in the places where the driver and adapter need them to read or
>>> update the indices.
>>>
>>> Fixes: 540a211084a7 ("bnx2x: driver core")
>>> Cc: stable at dpdk.org
>>>
>>> Signed-off-by: Rasesh Mody <rmody at marvell.com>
>>> ---
>>>  drivers/net/bnx2x/bnx2x.c      |  5 +++++
>>>  drivers/net/bnx2x/bnx2x_rxtx.c | 22 ++++++++++++++++++++++
>>>  2 files changed, 27 insertions(+)
>>>
>>> diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
>>> index ed31335ac..9c5e7995d 100644
>>> --- a/drivers/net/bnx2x/bnx2x.c
>>> +++ b/drivers/net/bnx2x/bnx2x.c
>>> @@ -1255,6 +1255,11 @@ static uint8_t bnx2x_rxeof(struct bnx2x_softc
>>*sc, struct bnx2x_fastpath *fp)
>>>                 return 0;
>>>         }
>>>
>>> +       /* Add memory barrier as status block fields can change. This memory
>>> +        * barrier will flush out all the read/write operations to status block
>>> +        * generated before the barrier. It will ensure stale data is not read
>>> +        */
>>> +       mb();
>>
>># Do you need full barriers here?
>
>Yes
>
>># Which architecture did you see this issue on?
>># rte_cio_* barriers are performance friendly. Have you checked whether
>>rte_cio_* would suffice for the requirements?
>>See the discussion in http://patches.dpdk.org/patch/64038/
>
>This patch is to prevent a potential issue which can happen in the absence of
>the required barriers.
>The above barrier is in the slow path. However, there is another full barrier in
>the fast path as part of this patch which can use rte_cio_*.
>I'll revise the patch and resend.

Note that the memory barrier changes are a preventive fix; we did not observe any performance issue with the original changes.
If we change the read/write fast path barriers to rte_cio_*, the PMD will end up having mixed barriers for the slow path and the fast path.
We'll revisit these changes later. I have sent a v2 series without the memory barrier changes.
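
As a rough illustration of the read-side trade-off discussed above: a
fast-path poll could use rte_cio_rmb() in place of a full barrier. The
sketch below is illustrative only, assuming a consumer index in
DMA-coherent host memory; the function and parameter names are invented
and this is not the actual bnx2x code, only the barrier API is real.

#include <stdint.h>
#include <rte_atomic.h>

/* Hypothetical fast-path poll: hw_cons_ptr points to the consumer index
 * that the adapter writes into DMA-coherent host memory.
 */
static inline uint16_t
poll_new_cqes(volatile uint16_t *hw_cons_ptr, uint16_t sw_cons)
{
	uint16_t hw_cons = *hw_cons_ptr;	/* index written by HW */

	/* Order the index load above before any CQE field loads below.
	 * The posted patch uses a full barrier (mb()) here; rte_cio_rmb()
	 * only orders loads against coherent I/O memory and is cheaper on
	 * weakly ordered CPUs.
	 */
	rte_cio_rmb();

	return (uint16_t)(hw_cons - sw_cons);	/* number of pending CQEs */
}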

Thanks!
-Rasesh
>
>>
>>I assume the 2/3 and 3/3 patches are for the slow path. If so, it is fine
>>to use full barriers in those patches.
>
>The 2/3 patch is a slow path fix and 3/3 fixes a race condition that can occur
>between the slow path and the fast path.
>
>Thanks!
>-Rasesh
>
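
For the write side mentioned in the original commit message (publishing a new
producer index to the adapter), the matching lightweight barrier would be
rte_cio_wmb(). A minimal sketch, again assuming the producer index lives in
DMA-coherent host memory that the device reads; the names are invented and
this is not the bnx2x code:

#include <stdint.h>
#include <rte_atomic.h>

/* Hypothetical write-side counterpart: shared_prod is an invented pointer
 * to a producer index that the adapter reads from host memory.
 */
static inline void
publish_rx_producer(volatile uint16_t *shared_prod, uint16_t new_prod)
{
	/* Make all prior buffer-descriptor stores visible to the device
	 * before the new producer index is published.
	 */
	rte_cio_wmb();
	*shared_prod = new_prod;
}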



