[dpdk-dev] [PATCH 2/2] af_xdp: avoid unnecessary allocation and freeing of mbufs

Li,Rongqing lirongqing at baidu.com
Sun Sep 20 08:02:07 CEST 2020



> -----Original Message-----
> From: Loftus, Ciara [mailto:ciara.loftus at intel.com]
> Sent: Friday, September 18, 2020 5:39 PM
> To: Li,Rongqing <lirongqing at baidu.com>
> Cc: dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 2/2] af_xdp: avoid unnecessary allocation and freeing of mbufs
> 
> >
> > optimize rx performance by allocating mbufs based on the result of
> > xsk_ring_cons__peek, to avoid redundant allocation and freeing of
> > mbufs when receiving packets
> >
> > Signed-off-by: Li RongQing <lirongqing at baidu.com>
> > Signed-off-by: Dongsheng Rong <rongdongsheng at baidu.com>
> > ---
> >  drivers/net/af_xdp/rte_eth_af_xdp.c | 64 ++++++++++++++++---------------------
> >  1 file changed, 27 insertions(+), 37 deletions(-)
> >
> > diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > index 7ce4ad04a..48824050e 100644
> > --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> > +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > @@ -229,28 +229,29 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> >  	struct xsk_umem_info *umem = rxq->umem;
> >  	uint32_t idx_rx = 0;
> >  	unsigned long rx_bytes = 0;
> > -	int rcvd, i;
> > +	int i;
> >  	struct rte_mbuf *fq_bufs[ETH_AF_XDP_RX_BATCH_SIZE];
> >
> > -	/* allocate bufs for fill queue replenishment after rx */
> > -	if (rte_pktmbuf_alloc_bulk(umem->mb_pool, fq_bufs, nb_pkts)) {
> > -		AF_XDP_LOG(DEBUG,
> > -			"Failed to get enough buffers for fq.\n");
> > -		return 0;
> > -	}
> >
> > -	rcvd = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
> > +	nb_pkts = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
> >
> > -	if (rcvd == 0) {
> > +	if (nb_pkts == 0) {
> >  #if defined(XDP_USE_NEED_WAKEUP)
> >  		if (xsk_ring_prod__needs_wakeup(&umem->fq))
> >  			(void)poll(rxq->fds, 1, 1000);
> >  #endif
> >
> > -		goto out;
> > +		return 0;
> >  	}
> >
> > -	for (i = 0; i < rcvd; i++) {
> > +	/* allocate bufs for fill queue replenishment after rx */
> > +	if (rte_pktmbuf_alloc_bulk(umem->mb_pool, fq_bufs, nb_pkts)) {
> > +		AF_XDP_LOG(DEBUG,
> > +			"Failed to get enough buffers for fq.\n");
> 
> Thanks for this patch. I've considered this in the past.
> There is a problem if we hit this condition.
> We advance the rx ring's consumer index @ xsk_ring_cons__peek.
> But if we have no mbufs to hold the rx data, it is lost.
> That's why we allocate the mbufs up front now.
> Agree that we might have wasteful allocations and it's not the most optimal,
> but we don't drop packets due to failed mbuf allocs.
> 
> > +		return 0;

xsk_ring_cons__peek advances the rx ring's cached_cons and updates its cached_prod. I think updating cached_prod is harmless,
so we can restore cached_cons if the mbufs fail to be allocated. Something like:


    if (unlikely(rte_pktmbuf_alloc_bulk(umem->mb_pool, fq_bufs, nb_pkts) != 0)) {
        rx->cached_cons -= nb_pkts;
        return 0;
    }
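
For reference, xsk_ring_cons__peek in libbpf's xsk.h does roughly the following (paraphrased from memory, with the memory barriers left out, so the exact code may differ between libbpf versions):

    /* rough paraphrase of libbpf's xsk_ring_cons__peek, not the literal source */
    static inline size_t xsk_ring_cons__peek(struct xsk_ring_cons *cons,
                                             size_t nb, __u32 *idx)
    {
        /* xsk_cons_nb_avail() may refresh cons->cached_prod from the
         * shared producer pointer, but it never writes to the shared ring
         */
        size_t entries = xsk_cons_nb_avail(cons, nb);

        if (entries > 0) {
            *idx = cons->cached_cons;
            cons->cached_cons += entries;  /* only the local cached copy moves */
        }

        return entries;
    }

The shared consumer pointer is only written later, in xsk_ring_cons__release(), so winding cached_cons back before any release should make the same descriptors show up again on the next peek.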

Is it right?
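
To make it concrete, the start of af_xdp_rx_zc would then look roughly like this (untested sketch on top of the hunk quoted above, with the local declarations as in the diff):

    nb_pkts = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);

    if (nb_pkts == 0) {
#if defined(XDP_USE_NEED_WAKEUP)
        if (xsk_ring_prod__needs_wakeup(&umem->fq))
            (void)poll(rxq->fds, 1, 1000);
#endif
        return 0;
    }

    /* allocate bufs only for the descriptors actually received; on
     * failure undo the peek so the packets are not lost and will be
     * picked up again on the next call
     */
    if (unlikely(rte_pktmbuf_alloc_bulk(umem->mb_pool, fq_bufs, nb_pkts) != 0)) {
        AF_XDP_LOG(DEBUG, "Failed to get enough buffers for fq.\n");
        rx->cached_cons -= nb_pkts;
        return 0;
    }

    for (i = 0; i < nb_pkts; i++) {
        /* ... rx processing and fq replenishment unchanged ... */
    }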

-Li

