[dpdk-stable] [PATCH] net/af_xdp: fix Tx halt when no recv packets

Zhang, Qi Z qi.z.zhang at intel.com
Tue Sep 10 06:14:06 CEST 2019



> -----Original Message-----
> From: Ye, Xiaolong
> Sent: Tuesday, September 10, 2019 12:13 AM
> To: Yigit, Ferruh <ferruh.yigit at intel.com>; Loftus, Ciara
> <ciara.loftus at intel.com>; Ye, Xiaolong <xiaolong.ye at intel.com>; Zhang, Qi Z
> <qi.z.zhang at intel.com>
> Cc: dev at dpdk.org; stable at dpdk.org
> Subject: [PATCH] net/af_xdp: fix Tx halt when no recv packets
> 
> The kernel only consumes Tx packets if we have some Rx traffic on the
> specified queue or we have called send(). So we need to issue a send()
> even when the allocation fails, so that the kernel will start consuming
> packets again.

So "allocation fails" here means xsk_ring_prod__reserve() fails, right?
I don't understand why the kernel would stop consuming Tx packets in this
situation when xsk_ring_prod__needs_wakeup is true.
Could you share more insight?

Thanks
Qi

> 
> Commit 45bba02c95b0 ("net/af_xdp: support need wakeup feature") breaks
> the above rule by making the send conditional; this patch fixes it while
> still keeping the need_wakeup feature for Tx.
> 
> Fixes: 45bba02c95b0 ("net/af_xdp: support need wakeup feature")
> Cc: stable at dpdk.org
> 
> Signed-off-by: Xiaolong Ye <xiaolong.ye at intel.com>
> ---
>  drivers/net/af_xdp/rte_eth_af_xdp.c | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c
> b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 41ed5b2af..e496e9aaa 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -286,19 +286,16 @@ kick_tx(struct pkt_tx_queue *txq)  {
>  	struct xsk_umem_info *umem = txq->pair->umem;
> 
> -#if defined(XDP_USE_NEED_WAKEUP)
> -	if (xsk_ring_prod__needs_wakeup(&txq->tx))
> -#endif
> -		while (send(xsk_socket__fd(txq->pair->xsk), NULL,
> -			    0, MSG_DONTWAIT) < 0) {
> -			/* some thing unexpected */
> -			if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
> -				break;
> -
> -			/* pull from completion queue to leave more space */
> -			if (errno == EAGAIN)
> -				pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
> -		}
> +	while (send(xsk_socket__fd(txq->pair->xsk), NULL,
> +		    0, MSG_DONTWAIT) < 0) {
> +		/* some thing unexpected */
> +		if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
> +			break;
> +
> +		/* pull from completion queue to leave more space */
> +		if (errno == EAGAIN)
> +			pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
> +	}
>  	pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);  }
> 
> @@ -367,7 +364,10 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf
> **bufs, uint16_t nb_pkts)
> 
>  	xsk_ring_prod__submit(&txq->tx, nb_pkts);
> 
> -	kick_tx(txq);
> +#if defined(XDP_USE_NEED_WAKEUP)
> +	if (xsk_ring_prod__needs_wakeup(&txq->tx))
> +#endif
> +		kick_tx(txq);
> 
>  	txq->stats.tx_pkts += nb_pkts;
>  	txq->stats.tx_bytes += tx_bytes;
> --
> 2.17.1


