[dpdk-stable] [PATCH] net/af_xdp: fix Tx halt when no recv packets

Loftus, Ciara ciara.loftus at intel.com
Tue Sep 17 11:13:44 CEST 2019


> 
> The kernel only consumes Tx packets if there is Rx traffic on the specified
> queue or send() has been called. So we need to issue a send() even when the
> allocation fails so that the kernel will start to consume packets again.
> 
> Commit 45bba02c95b0 ("net/af_xdp: support need wakeup feature") breaks
> the above rule by making the send() conditional. This patch fixes it while
> still keeping the need_wakeup feature for Tx.
> 
> Fixes: 45bba02c95b0 ("net/af_xdp: support need wakeup feature")
> Cc: stable at dpdk.org
> 
> Signed-off-by: Xiaolong Ye <xiaolong.ye at intel.com>

Thanks for the patch Xiaolong.

Verified that this resolves an issue whereby, when transmitting in one direction from a NIC PMD to the AF_XDP PMD, the AF_XDP PMD would stop transmitting after a short time.

Tested-by: Ciara Loftus <ciara.loftus at intel.com>

Thanks,
Ciara

> ---
>  drivers/net/af_xdp/rte_eth_af_xdp.c | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 41ed5b2af..e496e9aaa 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -286,19 +286,16 @@ kick_tx(struct pkt_tx_queue *txq)
>  {
>  	struct xsk_umem_info *umem = txq->pair->umem;
> 
> -#if defined(XDP_USE_NEED_WAKEUP)
> -	if (xsk_ring_prod__needs_wakeup(&txq->tx))
> -#endif
> -		while (send(xsk_socket__fd(txq->pair->xsk), NULL,
> -			    0, MSG_DONTWAIT) < 0) {
> -			/* some thing unexpected */
> -			if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
> -				break;
> -
> -			/* pull from completion queue to leave more space */
> -			if (errno == EAGAIN)
> -				pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
> -		}
> +	while (send(xsk_socket__fd(txq->pair->xsk), NULL,
> +		    0, MSG_DONTWAIT) < 0) {
> +		/* some thing unexpected */
> +		if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
> +			break;
> +
> +		/* pull from completion queue to leave more space */
> +		if (errno == EAGAIN)
> +			pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
> +	}
>  	pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
>  }
> 
> @@ -367,7 +364,10 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> 
>  	xsk_ring_prod__submit(&txq->tx, nb_pkts);
> 
> -	kick_tx(txq);
> +#if defined(XDP_USE_NEED_WAKEUP)
> +	if (xsk_ring_prod__needs_wakeup(&txq->tx))
> +#endif
> +		kick_tx(txq);
> 
>  	txq->stats.tx_pkts += nb_pkts;
>  	txq->stats.tx_bytes += tx_bytes;
> --
> 2.17.1


