[dpdk-dev] event/sw: fix hashing of flow on ordered ingress

Message ID 1491839803-172566-1-git-send-email-harry.van.haaren@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Checks

Context              | Check   | Description
ci/checkpatch        | success | coding style OK
ci/Intel-compilation | success | Compilation OK

Commit Message

Van Haaren, Harry April 10, 2017, 3:56 p.m. UTC
  The flow id of packets was not being hashed on ingress
on an ordered queue. Fix by applying same hashing as is
applied in the atomic queue case. The hashing itself is
broken out into a macro to avoid duplication of code.

Fixes: 617995dfc5b2 ("event/sw: add scheduling logic")

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 drivers/event/sw/sw_evdev_scheduler.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
  

Comments

Bruce Richardson April 13, 2017, 12:31 p.m. UTC | #1
On Mon, Apr 10, 2017 at 04:56:43PM +0100, Harry van Haaren wrote:
> The flow id of packets was not being hashed on ingress
> on an ordered queue. Fix by applying same hashing as is
> applied in the atomic queue case. The hashing itself is
> broken out into a macro to avoid duplication of code.
> 
> Fixes: 617995dfc5b2 ("event/sw: add scheduling logic")
> 
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
  
Thomas Monjalon April 19, 2017, 10:24 p.m. UTC | #2
13/04/2017 14:31, Bruce Richardson:
> On Mon, Apr 10, 2017 at 04:56:43PM +0100, Harry van Haaren wrote:
> > The flow id of packets was not being hashed on ingress
> > on an ordered queue. Fix by applying same hashing as is
> > applied in the atomic queue case. The hashing itself is
> > broken out into a macro to avoid duplication of code.
> > 
> > Fixes: 617995dfc5b2 ("event/sw: add scheduling logic")
> > 
> > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> > ---
> 
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>

Applied, thanks
  

Patch

diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index 77a16d7..e008b51 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -51,6 +51,8 @@ 
 
 #define MAX_PER_IQ_DEQUEUE 48
 #define FLOWID_MASK (SW_QID_NUM_FIDS-1)
+/* use cheap bit mixing, we only need to lose a few bits */
+#define SW_HASH_FLOWID(f) (((f) ^ (f >> 10)) & FLOWID_MASK)
 
 static inline uint32_t
 sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
@@ -72,9 +74,7 @@  sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 	iq_ring_dequeue_burst(qid->iq[iq_num], qes, count);
 	for (i = 0; i < count; i++) {
 		const struct rte_event *qe = &qes[i];
-		/* use cheap bit mixing, we only need to lose a few bits */
-		uint32_t flow_id32 = (qes[i].flow_id) ^ (qes[i].flow_id >> 10);
-		const uint16_t flow_id = FLOWID_MASK & flow_id32;
+		const uint16_t flow_id = SW_HASH_FLOWID(qes[i].flow_id);
 		struct sw_fid_t *fid = &qid->fids[flow_id];
 		int cq = fid->cq;
 
@@ -183,8 +183,7 @@  sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 		qid->stats.tx_pkts++;
 
 		const int head = (p->hist_head & (SW_PORT_HIST_LIST-1));
-
-		p->hist_list[head].fid = qe->flow_id;
+		p->hist_list[head].fid = SW_HASH_FLOWID(qe->flow_id);
 		p->hist_list[head].qid = qid_id;
 
 		if (keep_order)