af_xdp: avoid deadlock due to empty fill queue

Message ID 1600330014-22019-1-git-send-email-lirongqing@baidu.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Headers
Series af_xdp: avoid deadlock due to empty fill queue

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-testing success Testing PASS
ci/travis-robot success Travis build: passed
ci/Intel-compilation success Compilation OK

Commit Message

Li RongQing Sept. 17, 2020, 8:06 a.m. UTC
When receiving packets, it is possible to fail to reserve
the fill queue, since the buffer ring is shared between Tx
and Rx and may be temporarily unavailable; in the end both
the fill queue and the Rx queue are empty.

The kernel side will then be unable to receive packets because
the fill queue is empty, and DPDK will be unable to reserve the
fill queue because it has no packets to receive, so a deadlock
occurs.

Move the fill queue reservation before xsk_ring_cons__peek
to fix this.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
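
For readers less familiar with the driver, here is a minimal sketch of the
receive path as it looks after this patch. It is simplified from the driver's
af_xdp_rx_cp() and assumes its internal types and helpers (struct pkt_rx_queue,
struct xsk_umem_info, reserve_fill_queue()); it is illustrative only, not the
exact upstream code.

	/*
	 * Illustrative sketch: simplified from af_xdp_rx_cp(), with stats,
	 * error handling and the descriptor-to-mbuf copy loop trimmed.
	 */
	static uint16_t
	rx_after_patch(struct pkt_rx_queue *rxq, struct rte_mbuf **bufs,
		       uint16_t nb_pkts)
	{
		struct xsk_umem_info *umem = rxq->umem;
		struct xsk_ring_prod *fq = &umem->fq;
		uint32_t free_thresh = fq->size >> 1;
		uint32_t idx_rx = 0;
		int rcvd;

		/* 1. Top up the fill queue first, before looking at the rx
		 *    ring, so the kernel always has buffers to receive into,
		 *    even when this poll finds no packets.
		 */
		if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
			(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE,
						 NULL);

		/* 2. Only now peek the rx ring. Returning 0 here is harmless:
		 *    step 1 already ran, so the "empty fill queue and empty
		 *    rx ring" deadlock described above cannot form.
		 */
		rcvd = xsk_ring_cons__peek(&rxq->rx, nb_pkts, &idx_rx);
		if (rcvd == 0)
			return 0;

		/* 3. ... copy the rcvd descriptors into bufs[], release the
		 *    rx ring and return the packet count ...
		 */
		return rcvd;
	}

With the old ordering, the refill in step 1 sat after the peek, so once both
rings drained the driver returned early on every call and never refilled the
fill queue; that is the cycle this patch breaks.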
  

Comments

Loftus, Ciara Sept. 18, 2020, 9:27 a.m. UTC | #1
> When receiving packets, it is possible to fail to reserve
> the fill queue, since the buffer ring is shared between Tx
> and Rx and may be temporarily unavailable; in the end both
> the fill queue and the Rx queue are empty.
> 
> The kernel side will then be unable to receive packets because
> the fill queue is empty, and DPDK will be unable to reserve the
> fill queue because it has no packets to receive, so a deadlock
> occurs.
> 
> Move the fill queue reservation before xsk_ring_cons__peek
> to fix this.
> 
> Signed-off-by: Li RongQing <lirongqing@baidu.com>

Thanks for the fix. I tested and saw no significant performance drop.

Minor: the first line of the commit should read "net/af_xdp: ...."

Acked-by: Ciara Loftus <ciara.loftus@intel.com>

CC-ing stable as I think this fix should be considered for inclusion.

Thanks,
Ciara

  
Li RongQing Sept. 18, 2020, 11:24 a.m. UTC | #2
> -----Original Message-----
> From: Loftus, Ciara [mailto:ciara.loftus@intel.com]
> Sent: Friday, September 18, 2020 5:27 PM
> To: Li,Rongqing <lirongqing@baidu.com>; dev@dpdk.org
> Cc: stable@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH] af_xdp: avoid deadlock due to empty fill queue
> 
> > When receiving packets, it is possible to fail to reserve the fill queue,
> > since the buffer ring is shared between Tx and Rx and may be temporarily
> > unavailable; in the end both the fill queue and the Rx queue are empty.
> >
> > The kernel side will then be unable to receive packets because the fill
> > queue is empty, and DPDK will be unable to reserve the fill queue because
> > it has no packets to receive, so a deadlock occurs.
> >
> > Move the fill queue reservation before xsk_ring_cons__peek to fix this.
> >
> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> 
> Thanks for the fix. I tested and saw no significant performance drop.
> 
> Minor: the first line of the commit should read "net/af_xdp: ...."
> 
> Acked-by: Ciara Loftus <ciara.loftus@intel.com>
> 
> CC-ing stable as I think this fix should be considered for inclusion.
> 
> Thanks,
> Ciara
> 

Thanks, I will send v2

-Li


  

Patch

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 7ce4ad04a..2dc9cab27 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -304,6 +304,10 @@  af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint32_t free_thresh = fq->size >> 1;
 	struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE];
 
+	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
+		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
+
+
 	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts) != 0))
 		return 0;
 
@@ -317,9 +321,6 @@  af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		goto out;
 	}
 
-	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
-		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
-
 	for (i = 0; i < rcvd; i++) {
 		const struct xdp_desc *desc;
 		uint64_t addr;