[dpdk-stable] patch 'eventdev: add event buffer flush in Rx adapter' has been queued to LTS release 17.11.4

Yongseok Koh yskoh at mellanox.com
Fri Jul 27 04:09:08 CEST 2018


Hi,

FYI, your patch has been queued to LTS release 17.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 07/28/18, so please
shout if you have any objections.

Thanks.

Yongseok

---
From 9b4c8ab2cfbe9ed232c417cdc4a78f66e12374a5 Mon Sep 17 00:00:00 2001
From: Nikhil Rao <nikhil.rao at intel.com>
Date: Sun, 3 Jun 2018 18:12:25 +0530
Subject: [PATCH] eventdev: add event buffer flush in Rx adapter

[ upstream commit 6b83f59355437c0631a64e5ecb9f080c17a8ba24 ]

Add an event buffer flush when the current invocation
of the Rx adapter is completed.

This patch provides lower latency in case there are BATCH_SIZE
or more events in the event buffer when the poll completes.

Suggested-by: Narender Vangati <narender.vangati at intel.com>
Signed-off-by: Nikhil Rao <nikhil.rao at intel.com>
Acked-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
---
 lib/librte_eventdev/rte_event_eth_rx_adapter.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
index 1cdbb848b..377db42d9 100644
--- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
@@ -476,7 +476,7 @@ fill_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter,
  * the hypervisor's switching layer where adjustments can be made to deal with
  * it.
  */
-static inline uint32_t
+static inline void
 eth_rx_poll(struct rte_event_eth_rx_adapter *rx_adapter)
 {
 	uint32_t num_queue;
@@ -505,7 +505,7 @@ eth_rx_poll(struct rte_event_eth_rx_adapter *rx_adapter)
 			flush_event_buffer(rx_adapter);
 		if (BATCH_SIZE > (ETH_EVENT_BUFFER_SIZE - buf->count)) {
 			rx_adapter->wrr_pos = wrr_pos;
-			break;
+			return;
 		}
 
 		stats->rx_poll_count++;
@@ -521,7 +521,7 @@ eth_rx_poll(struct rte_event_eth_rx_adapter *rx_adapter)
 			if (nb_rx > max_nb_rx) {
 				rx_adapter->wrr_pos =
 				    (wrr_pos + 1) % rx_adapter->wrr_len;
-				return nb_rx;
+				break;
 			}
 		}
 
@@ -529,20 +529,18 @@ eth_rx_poll(struct rte_event_eth_rx_adapter *rx_adapter)
 			wrr_pos = 0;
 	}
 
-	return nb_rx;
+	if (buf->count >= BATCH_SIZE)
+		flush_event_buffer(rx_adapter);
 }
 
 static int
 event_eth_rx_adapter_service_func(void *args)
 {
 	struct rte_event_eth_rx_adapter *rx_adapter = args;
-	struct rte_eth_event_enqueue_buffer *buf;
 
-	buf = &rx_adapter->event_enqueue_buffer;
 	if (rte_spinlock_trylock(&rx_adapter->rx_lock) == 0)
 		return 0;
-	if (eth_rx_poll(rx_adapter) == 0 && buf->count)
-		flush_event_buffer(rx_adapter);
+	eth_rx_poll(rx_adapter);
 	rte_spinlock_unlock(&rx_adapter->rx_lock);
 	return 0;
 }
-- 
2.11.0
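
For readers skimming the archive, here is a minimal, self-contained C
sketch of the control-flow change the patch makes. It is not the DPDK
source: struct event_buffer, buffer_flush() and poll_and_flush() are
hypothetical stand-ins for the adapter's enqueue buffer,
flush_event_buffer() and eth_rx_poll().

#include <stdint.h>
#include <stdio.h>

#define BATCH_SIZE 32

/* Hypothetical stand-in for the adapter's enqueue buffer. */
struct event_buffer {
	uint32_t count;
};

/* Hypothetical stand-in for flush_event_buffer(): enqueue the buffered
 * events to the event device and reset the count. */
static void
buffer_flush(struct event_buffer *buf)
{
	printf("flushing %u events\n", buf->count);
	buf->count = 0;
}

/* After the patch, the poll routine itself flushes on exit whenever a
 * full batch has accumulated, instead of relying on the service
 * function to flush only when the poll returned zero packets. */
static void
poll_and_flush(struct event_buffer *buf, uint32_t nb_rx)
{
	buf->count += nb_rx;		/* stand-in for the Rx burst loop */

	if (buf->count >= BATCH_SIZE)	/* flush added by this patch */
		buffer_flush(buf);
}

int
main(void)
{
	struct event_buffer buf = { .count = 0 };

	poll_and_flush(&buf, 40);	/* >= BATCH_SIZE: flushed now */
	poll_and_flush(&buf, 8);	/* < BATCH_SIZE: held for later */
	return 0;
}

The upshot of moving the flush into the poll routine is that a full
batch never sits in the buffer across service invocations, which is
where the latency improvement in the commit message comes from; it also
lets eth_rx_poll() return void and simplifies the service function.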


