[v2,1/3] net/af_xdp: fix Tx halt when no recv packets
Commit Message
From: Xiaolong Ye <xiaolong.ye@intel.com>
The kernel only consumes Tx packets if there is Rx traffic on the specified
queue or if send() has been called. So we need to issue a send() even when
mbuf allocation fails, so that the kernel will start consuming packets again.

Commit 45bba02c95b0 ("net/af_xdp: support need wakeup feature") breaks the
above rule by making the send conditional; this patch fixes it while still
keeping the need_wakeup feature for Tx.
Fixes: 45bba02c95b0 ("net/af_xdp: support need wakeup feature")
Cc: stable@dpdk.org
Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
Tested-by: Ciara Loftus <ciara.loftus@intel.com>
---
drivers/net/af_xdp/rte_eth_af_xdp.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
Comments
Hi,
As this is indeed a kernel driver issue, and Magnus is working on the
workaround/fix in the i40e kernel driver, this patch can be dropped as long
as that workaround/fix is merged in v5.4.
Thanks,
Xiaolong
On 09/30, Ciara Loftus wrote:
>From: Xiaolong Ye <xiaolong.ye@intel.com>
>
>The kernel only consumes Tx packets if there is Rx traffic on the specified
>queue or if send() has been called. So we need to issue a send() even when
>mbuf allocation fails, so that the kernel will start consuming packets again.
>
>Commit 45bba02c95b0 ("net/af_xdp: support need wakeup feature") breaks the
>above rule by making the send conditional; this patch fixes it while still
>keeping the need_wakeup feature for Tx.
>
>Fixes: 45bba02c95b0 ("net/af_xdp: support need wakeup feature")
>Cc: stable@dpdk.org
>
>Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
>Tested-by: Ciara Loftus <ciara.loftus@intel.com>
>---
> drivers/net/af_xdp/rte_eth_af_xdp.c | 28 ++++++++++++++--------------
> 1 file changed, 14 insertions(+), 14 deletions(-)
>
>diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
>index 41ed5b2af..e496e9aaa 100644
>--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
>+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
>@@ -286,19 +286,16 @@ kick_tx(struct pkt_tx_queue *txq)
> {
> struct xsk_umem_info *umem = txq->pair->umem;
>
>-#if defined(XDP_USE_NEED_WAKEUP)
>- if (xsk_ring_prod__needs_wakeup(&txq->tx))
>-#endif
>- while (send(xsk_socket__fd(txq->pair->xsk), NULL,
>- 0, MSG_DONTWAIT) < 0) {
>- /* some thing unexpected */
>- if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
>- break;
>-
>- /* pull from completion queue to leave more space */
>- if (errno == EAGAIN)
>- pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
>- }
>+ while (send(xsk_socket__fd(txq->pair->xsk), NULL,
>+ 0, MSG_DONTWAIT) < 0) {
>+ /* some thing unexpected */
>+ if (errno != EBUSY && errno != EAGAIN && errno != EINTR)
>+ break;
>+
>+ /* pull from completion queue to leave more space */
>+ if (errno == EAGAIN)
>+ pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
>+ }
> pull_umem_cq(umem, ETH_AF_XDP_TX_BATCH_SIZE);
> }
>
>@@ -367,7 +364,10 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>
> xsk_ring_prod__submit(&txq->tx, nb_pkts);
>
>- kick_tx(txq);
>+#if defined(XDP_USE_NEED_WAKEUP)
>+ if (xsk_ring_prod__needs_wakeup(&txq->tx))
>+#endif
>+ kick_tx(txq);
>
> txq->stats.tx_pkts += nb_pkts;
> txq->stats.tx_bytes += tx_bytes;
>--
>2.17.1
>