[dpdk-stable] patch 'sched: fix port time rounding' has been queued to LTS release 18.11.10

Kevin Traynor ktraynor at redhat.com
Fri Jul 17 18:32:09 CEST 2020


Hi,

FYI, your patch has been queued to LTS release 18.11.10

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 07/23/20, so please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate whether any rebasing was
needed to apply it to the stable branch. If there were code changes for
rebasing (i.e. not only metadata diffs), please double-check that the
rebase was done correctly.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable-queue/commit/38f30a1f450b56666f5339ee4e68f795cdf51b06

Thanks.

Kevin.

---
From 38f30a1f450b56666f5339ee4e68f795cdf51b06 Mon Sep 17 00:00:00 2001
From: Alan Dewar <alan.dewar at att.com>
Date: Thu, 25 Jun 2020 10:59:30 +0100
Subject: [PATCH] sched: fix port time rounding

[ upstream commit 83415d4fd88c925002655aa755601998a3cdef2c ]

The QoS scheduler works off port time that is computed from the number
of CPU cycles that have elapsed since the last time the port was
polled. It divides the number of elapsed cycles to calculate how
many bytes can be sent; however, this division can generate rounding
errors, where some fraction of a byte sent may be lost.

Lose enough of these fractional bytes and the QoS scheduler
underperforms. The problem is worse at low bandwidths.

To compensate for this rounding error, this fix advances the port's
time_cpu_cycles not by the number of cycles that have elapsed, but by
the computed number of bytes that can be sent (which has been rounded
down) multiplied by the number of cycles per byte. This means that the
port's time_cpu_cycles will momentarily lag behind the CPU cycle
count. At the next poll, the lag will be taken into account.

Fixes: de3cfa2c98 ("sched: initial import")

Signed-off-by: Alan Dewar <alan.dewar at att.com>
Acked-by: Jasvinder Singh <jasvinder.singh at intel.com>
---
 lib/librte_sched/rte_sched.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 89c3d1e7f7..d8ace20fe0 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -201,4 +201,5 @@ struct rte_sched_port {
 	uint64_t time;                /* Current NIC TX time measured in bytes */
 	struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
+	uint64_t cycles_per_byte;
 
 	/* Scheduling loop detection */
@@ -683,4 +684,5 @@ rte_sched_port_config(struct rte_sched_port_params *params)
 		/ params->rate;
 	port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
+	port->cycles_per_byte = cycles_per_byte;
 
 	/* Scheduling loop detection */
@@ -2191,7 +2193,11 @@ rte_sched_port_time_resync(struct rte_sched_port *port)
 {
 	uint64_t cycles = rte_get_tsc_cycles();
-	uint64_t cycles_diff = cycles - port->time_cpu_cycles;
+	uint64_t cycles_diff;
 	uint64_t bytes_diff;
 
+	if (cycles < port->time_cpu_cycles)
+		port->time_cpu_cycles = 0;
+
+	cycles_diff = cycles - port->time_cpu_cycles;
 	/* Compute elapsed time in bytes */
 	bytes_diff = rte_reciprocal_divide(cycles_diff << RTE_SCHED_TIME_SHIFT,
@@ -2199,5 +2205,6 @@ rte_sched_port_time_resync(struct rte_sched_port *port)
 
 	/* Advance port time */
-	port->time_cpu_cycles = cycles;
+	port->time_cpu_cycles +=
+		(bytes_diff * port->cycles_per_byte) >> RTE_SCHED_TIME_SHIFT;
 	port->time_cpu_bytes += bytes_diff;
 	if (port->time < port->time_cpu_bytes)
-- 
2.21.3

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2020-07-17 17:17:01.165113282 +0100
+++ 0021-sched-fix-port-time-rounding.patch	2020-07-17 17:16:59.997771060 +0100
@@ -1 +1 @@
-From 83415d4fd88c925002655aa755601998a3cdef2c Mon Sep 17 00:00:00 2001
+From 38f30a1f450b56666f5339ee4e68f795cdf51b06 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 83415d4fd88c925002655aa755601998a3cdef2c ]
+
@@ -24 +25,0 @@
-Cc: stable at dpdk.org
@@ -33 +34 @@
-index 68a171b508..0fa0741664 100644
+index 89c3d1e7f7..d8ace20fe0 100644
@@ -36 +37 @@
-@@ -223,4 +223,5 @@ struct rte_sched_port {
+@@ -201,4 +201,5 @@ struct rte_sched_port {
@@ -41,2 +42,2 @@
- 	/* Grinders */
-@@ -853,4 +854,5 @@ rte_sched_port_config(struct rte_sched_port_params *params)
+ 	/* Scheduling loop detection */
+@@ -683,4 +684,5 @@ rte_sched_port_config(struct rte_sched_port_params *params)
@@ -47,2 +48,2 @@
- 	/* Grinders */
-@@ -2674,8 +2676,12 @@ rte_sched_port_time_resync(struct rte_sched_port *port)
+ 	/* Scheduling loop detection */
+@@ -2191,7 +2193,11 @@ rte_sched_port_time_resync(struct rte_sched_port *port)
@@ -54 +54,0 @@
- 	uint32_t i;
@@ -62 +62 @@
-@@ -2683,5 +2689,6 @@ rte_sched_port_time_resync(struct rte_sched_port *port)
+@@ -2199,5 +2205,6 @@ rte_sched_port_time_resync(struct rte_sched_port *port)


