[PATCH v2 08/21] net/ena: perform Tx cleanup before sending pkts
Michal Krawczyk
mk at semihalf.com
Tue Feb 22 19:11:33 CET 2022
To increase the likelihood that the current burst will fit in the HW
rings, perform Tx cleanup before pushing packets to the HW. This may
slightly increase latency for sparse bursts, but the Tx flow should
now be smoother.
This is also the common order in the Tx burst functions of other PMDs.
Signed-off-by: Michal Krawczyk <mk at semihalf.com>
Reviewed-by: Dawid Gorecki <dgr at semihalf.com>
Reviewed-by: Shai Brandes <shaibran at amazon.com>
---
drivers/net/ena/ena_ethdev.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4b82372155..ed3dd162ba 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -2776,6 +2776,10 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
#endif
+ available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+ if (available_desc < tx_ring->tx_free_thresh)
+ ena_tx_cleanup(tx_ring);
+
for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx]))
break;
@@ -2784,9 +2788,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_ring->size_mask)]);
}
- available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
- tx_ring->tx_stats.available_desc = available_desc;
-
/* If there are ready packets to be xmitted... */
if (likely(tx_ring->pkts_without_db)) {
/* ...let HW do its best :-) */
@@ -2795,9 +2796,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_ring->pkts_without_db = false;
}
- if (available_desc < tx_ring->tx_free_thresh)
- ena_tx_cleanup(tx_ring);
-
tx_ring->tx_stats.available_desc =
ena_com_free_q_entries(tx_ring->ena_com_io_sq);
tx_ring->tx_stats.tx_poll++;
--
2.25.1