[dpdk-dev,v6] ring: guarantee load/load order in enqueue and dequeue

Message ID 1510284642-7442-2-git-send-email-hejianet@gmail.com (mailing list archive)
State Accepted, archived

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Jia He Nov. 10, 2017, 3:30 a.m. UTC
We observed an rte panic in mbuf_autotest on our Qualcomm arm64 server
(Amberwing).

Root cause:
In __rte_ring_move_cons_head()
...
        do {
                /* Restore n as it may change every loop */
                n = max;

                *old_head = r->cons.head;                //1st load
                const uint32_t prod_tail = r->prod.tail; //2nd load

On weakly ordered architectures (powerpc, arm), the 2nd load might be
reordered before the 1st load, which makes *entries bigger than intended.
This nasty reordering messes up enqueue/dequeue.

cpu1(producer)          cpu2(consumer)          cpu3(consumer)
                        load r->prod.tail
in enqueue:
load r->cons.tail
load r->prod.head

store r->prod.tail

                                                load r->cons.head
                                                load r->prod.tail
                                                ...
                                                store r->cons.{head,tail}
                        load r->cons.head

Then r->cons.head will be bigger than prod_tail, which makes *entries very
big, and the consumer moves forward incorrectly.
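
As a minimal illustration (with made-up index values) of how the stale
prod_tail blows up the entry count: the subtraction in
__rte_ring_move_cons_head() is done on unsigned 32-bit values, so a
"negative" difference wraps around to a huge positive one.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t prod_tail = 1000; /* stale value, loaded too early */
        uint32_t cons_head = 1002; /* fresher value, already past it */

        /* same expression as *entries = (prod_tail - *old_head) */
        uint32_t entries = prod_tail - cons_head;

        printf("%u\n", entries);   /* prints 4294967294, not -2 */
        return 0;
    }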

After this patch, the old cons.head will be recalculated after a failure
of rte_atomic32_cmpset().

There is no such issue on x86, because x86 has a strong memory ordering
model. Since rte_smp_rmb() has no runtime performance impact on x86, the
same code is kept without architecture-specific special cases.
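
For reference, rte_smp_rmb() expands roughly as follows on each
architecture (a paraphrase of DPDK's rte_atomic headers, not the exact
definitions):

    #ifdef RTE_ARCH_X86
    /* x86 never reorders loads with other loads, so restraining the
     * compiler is enough; no instruction is emitted.
     */
    #define rte_smp_rmb() asm volatile("" : : : "memory")
    #else /* weakly ordered CPUs, e.g. arm64 */
    /* a real load/load barrier is required */
    #define rte_smp_rmb() asm volatile("dmb ishld" : : : "memory")
    #endif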

Signed-off-by: Jia He <jia.he@hxt-semitech.com>
Signed-off-by: jie2.liu@hxt-semitech.com
Signed-off-by: bing.zhao@hxt-semitech.com
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Jianbo Liu <jianbo.liu@arm.com>
Cc: stable@dpdk.org

---
 lib/librte_ring/rte_ring.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)
  

Comments

Thomas Monjalon Nov. 12, 2017, 5:51 p.m. UTC | #1
10/11/2017 04:30, Jia He:
> We observed an rte panic in mbuf_autotest on our Qualcomm arm64 server
> (Amberwing).
> [...]

Applied, thanks
  

Patch

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 5e9b3b7..e924438 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -409,6 +409,12 @@  __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
 		n = max;
 
 		*old_head = r->prod.head;
+
+		/* add rmb barrier to avoid load/load reorder in weak
+		 * memory model. It is noop on x86
+		 */
+		rte_smp_rmb();
+
 		const uint32_t cons_tail = r->cons.tail;
 		/*
 		 *  The subtraction is done between two unsigned 32bits value
@@ -517,6 +523,12 @@  __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 		n = max;
 
 		*old_head = r->cons.head;
+
+		/* add rmb barrier to avoid load/load reorder in weak
+		 * memory model. It is noop on x86
+		 */
+		rte_smp_rmb();
+
 		const uint32_t prod_tail = r->prod.tail;
 		/* The subtraction is done between two unsigned 32bits value
 		 * (the result is always modulo 32 bits even if we have