[PATCH v3] net/ixgbe: add proper memory barriers for some Rx functions

Zhang, Qi Z qi.z.zhang at intel.com
Mon May 15 04:10:01 CEST 2023



> -----Original Message-----
> From: Ruifeng Wang <Ruifeng.Wang at arm.com>
> Sent: Monday, May 8, 2023 2:03 PM
> To: Min Zhou <zhoumin at loongson.cn>; Zhang, Qi Z <qi.z.zhang at intel.com>;
> mb at smartsharesystems.com; konstantin.v.ananyev at yandex.ru; Yang,
> Qiming <qiming.yang at intel.com>; Wu, Wenjun1 <wenjun1.wu at intel.com>
> Cc: drc at linux.vnet.ibm.com; roretzla at linux.microsoft.com; dev at dpdk.org;
> stable at dpdk.org; maobibo at loongson.cn; nd <nd at arm.com>
> Subject: RE: [PATCH v3] net/ixgbe: add proper memory barriers for some Rx
> functions
> 
> > -----Original Message-----
> > From: Min Zhou <zhoumin at loongson.cn>
> > Sent: Saturday, May 6, 2023 6:24 PM
> > To: qi.z.zhang at intel.com; mb at smartsharesystems.com;
> > konstantin.v.ananyev at yandex.ru; qiming.yang at intel.com;
> > wenjun1.wu at intel.com; zhoumin at loongson.cn
> > Cc: Ruifeng Wang <Ruifeng.Wang at arm.com>; drc at linux.vnet.ibm.com;
> > roretzla at linux.microsoft.com; dev at dpdk.org; stable at dpdk.org;
> > maobibo at loongson.cn
> > Subject: [PATCH v3] net/ixgbe: add proper memory barriers for some Rx
> > functions
> >
> > A segmentation fault has been observed while running the
> > ixgbe_recv_pkts_lro() function to receive packets on the Loongson
> > 3C5000 processor, which has 64 cores and 4 NUMA nodes.
> >
> > From the ixgbe_recv_pkts_lro() function, we found that as long as the
> > first packet has the EOP bit set and its length is less than or equal
> > to rxq->crc_len, the segmentation fault will definitely happen, even
> > on other platforms. For example, if we forced the first packet with
> > the EOP bit set to have a zero length, the segmentation fault would
> > also happen on x86.
> >
> > This is because, when processing the first packet, first_seg->next
> > will be NULL. If at the same time this packet has the EOP bit set and
> > its length is less than or equal to rxq->crc_len, the following loop
> > will be executed:
> >
> >     for (lp = first_seg; lp->next != rxm; lp = lp->next)
> >         ;
> >
> > We know that first_seg->next will be NULL under this condition, so
> > evaluating lp->next->next causes the segmentation fault.
> >
> > Normally, the length of the first packet with the EOP bit set will be
> > greater than rxq->crc_len. However, out-of-order execution by the CPU
> > may break the read ordering between the status field and the rest of
> > the descriptor fields in this function. The related code is as
> > follows:
> >
> >         rxdp = &rx_ring[rx_id];
> >  #1     staterr = rte_le_to_cpu_32(rxdp->wb.upper.status_error);
> >
> >         if (!(staterr & IXGBE_RXDADV_STAT_DD))
> >             break;
> >
> >  #2     rxd = *rxdp;
> >
> > Load #2 may be executed before load #1. This reordering can make a
> > ready packet appear to have zero length. If that packet is the first
> > packet and has the EOP bit set, the above segmentation fault happens.
> >
> > So, we should add a proper memory barrier to ensure the read ordering
> > is correct. We also do the same in the ixgbe_recv_pkts() function to
> > make the rxd data valid, even though we did not observe a
> > segmentation fault there.
> >
> > Fixes: 8eecb3295ae ("ixgbe: add LRO support")
> > Cc: stable at dpdk.org
> >
> > Signed-off-by: Min Zhou <zhoumin at loongson.cn>
> > ---
> > v3:
> > - Use rte_smp_rmb() as the proper memory barrier instead of rte_rmb()
> > ---
> > v2:
> > - Make the calling of rte_rmb() for all platforms
> > ---
> >  drivers/net/ixgbe/ixgbe_rxtx.c | 39 ++++++++++++----------------------
> >  1 file changed, 13 insertions(+), 26 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> > index 6b3d3a4d1a..80bcaef093 100644
> > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > @@ -1823,6 +1823,12 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> >  		staterr = rxdp->wb.upper.status_error;
> >  		if (!(staterr & rte_cpu_to_le_32(IXGBE_RXDADV_STAT_DD)))
> >  			break;
> > +
> > +		/*
> > +		 * This barrier is to ensure that status_error which includes DD
> > +		 * bit is loaded before loading of other descriptor words.
> > +		 */
> > +		rte_smp_rmb();
> >  		rxd = *rxdp;
> >
> >  		/*
> > @@ -2089,32 +2095,8 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
> >
> >  next_desc:
> >  		/*
> > -		 * The code in this whole file uses the volatile pointer to
> > -		 * ensure the read ordering of the status and the rest of the
> > -		 * descriptor fields (on the compiler level only!!!). This is so
> > -		 * UGLY - why not to just use the compiler barrier instead? DPDK
> > -		 * even has the rte_compiler_barrier() for that.
> > -		 *
> > -		 * But most importantly this is just wrong because this doesn't
> > -		 * ensure memory ordering in a general case at all. For
> > -		 * instance, DPDK is supposed to work on Power CPUs where
> > -		 * compiler barrier may just not be enough!
> > -		 *
> > -		 * I tried to write only this function properly to have a
> > -		 * starting point (as a part of an LRO/RSC series) but the
> > -		 * compiler cursed at me when I tried to cast away the
> > -		 * "volatile" from rx_ring (yes, it's volatile too!!!). So, I'm
> > -		 * keeping it the way it is for now.
> > -		 *
> > -		 * The code in this file is broken in so many other places and
> > -		 * will just not work on a big endian CPU anyway therefore the
> > -		 * lines below will have to be revisited together with the rest
> > -		 * of the ixgbe PMD.
> > -		 *
> > -		 * TODO:
> > -		 *    - Get rid of "volatile" and let the compiler do its job.
> > -		 *    - Use the proper memory barrier (rte_rmb()) to ensure the
> > -		 *      memory ordering below.
> > +		 * It is necessary to use a proper memory barrier to ensure the
> > +		 * memory ordering below.
> >  		 */
> >  		rxdp = &rx_ring[rx_id];
> >  		staterr = rte_le_to_cpu_32(rxdp->wb.upper.status_error);
> > @@ -2122,6 +2104,11 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
> >  		if (!(staterr & IXGBE_RXDADV_STAT_DD))
> >  			break;
> >
> > +		/*
> > +		 * This barrier is to ensure that status_error which includes DD
> > +		 * bit is loaded before loading of other descriptor words.
> > +		 */
> > +		rte_smp_rmb();
> >  		rxd = *rxdp;
> >
> >  		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
> > --
> > 2.31.1
> Reviewed-by: Ruifeng Wang <ruifeng.wang at arm.com>

Applied to dpdk-next-net-intel.

Thanks
Qi

