[dpdk-dev] [PATCH] ring: guarantee ordering of cons/prod loading when doing enqueue/dequeue

Kuusisaari, Juhamatti Juhamatti.Kuusisaari at coriant.com
Mon Oct 16 12:51:34 CEST 2017



> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Liu, Jie2
> Sent: Friday, October 13, 2017 3:25 AM
> To: Ananyev, Konstantin <konstantin.ananyev at intel.com>; Olivier MATZ
> <olivier.matz at 6wind.com>; dev at dpdk.org;
> jerin.jacob at caviumnetworks.com
> Cc: He, Jia <jia.he at hxt-semitech.com>; Zhao, Bing <bing.zhao at hxt-
> semitech.com>; Jia He <hejianet at gmail.com>
> Subject: Re: [dpdk-dev] [PATCH] ring: guarantee ordering of cons/prod
> loading when doing enqueue/dequeue
> 
> Hi guys,
> We found this issue when we ran mbuf_autotest. It failed on an aarch64
> platform. I am not sure if it can be reproduced on other platforms.
> Regards,
> Jie Liu
> 
> -----Original Message-----
> From: Ananyev, Konstantin [mailto:konstantin.ananyev at intel.com]
> Sent: October 13, 2017 1:06
> To: Olivier MATZ <olivier.matz at 6wind.com>; Jia He <hejianet at gmail.com>
> Cc: dev at dpdk.org; He, Jia <jia.he at hxt-semitech.com>; Liu, Jie2
> <jie2.liu at hxt-semitech.com>; Zhao, Bing <bing.zhao at hxt-semitech.com>;
> jerin.jacob at caviumnetworks.com
> Subject: RE: [PATCH] ring: guarantee ordering of cons/prod loading when
> doing enqueue/dequeue
> 
> Hi guys,
> 
> > -----Original Message-----
> > From: Olivier MATZ [mailto:olivier.matz at 6wind.com]
> > Sent: Thursday, October 12, 2017 4:54 PM
> > To: Jia He <hejianet at gmail.com>
> > Cc: dev at dpdk.org; jia.he at hxt-semitech.com; jie2.liu at hxt-semitech.com;
> > bing.zhao at hxt-semitech.com; Ananyev, Konstantin
> > <konstantin.ananyev at intel.com>; jerin.jacob at caviumnetworks.com
> > Subject: Re: [PATCH] ring: guarantee ordering of cons/prod loading
> > when doing enqueue/dequeue
> >
> > Hi,
> >
> > On Tue, Oct 10, 2017 at 05:56:36PM +0800, Jia He wrote:
> > > Before this patch:
> > > In __rte_ring_move_cons_head()
> > > ...
> > >         do {
> > >                 /* Restore n as it may change every loop */
> > >                 n = max;
> > >
> > >                 *old_head = r->cons.head;                //1st load
> > >                 const uint32_t prod_tail = r->prod.tail; //2nd load
> > >
> > > In weak memory ordering architectures (powerpc, arm), the 2nd load might
> > > be reordered before the 1st load, which makes *entries bigger than
> > > we wanted. This nasty reordering messes enqueue/dequeue up.
> > >
> > > cpu1(producer)          cpu2(consumer)          cpu3(consumer)
> > >                         load r->prod.tail in enqueue:
> > > load r->cons.tail
> > > load r->prod.head
> > >
> > > store r->prod.tail
> > >
> > >                                                 load r->cons.head
> > >                                                 load r->prod.tail
> > >                                                 ...
> > >                                                 store r->cons.{head,tail}
> > >                         load r->cons.head
> > >
> > > Then r->cons.head will be bigger than prod_tail, which makes *entries
> > > very big.
> > >
> > > After this patch, the old cons.head will be recalculated after a
> > > failure of rte_atomic32_cmpset.
> > >
> > > There is no such issue on X86 CPUs, because X86 has a strong memory
> > > ordering model.
> > >
> > > Signed-off-by: Jia He <hejianet at gmail.com>
> > > Signed-off-by: jia.he at hxt-semitech.com
> > > Signed-off-by: jie2.liu at hxt-semitech.com
> > > Signed-off-by: bing.zhao at hxt-semitech.com
> > >
> > > ---
> > >  lib/librte_ring/rte_ring.h | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > >
> > > diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> > > index 5e9b3b7..15c72e2 100644
> > > --- a/lib/librte_ring/rte_ring.h
> > > +++ b/lib/librte_ring/rte_ring.h
> > > @@ -409,6 +409,10 @@ __rte_ring_move_prod_head(struct rte_ring *r,
> > > int is_sp,  n = max;
> > >
> > >  *old_head = r->prod.head;
> > > +
> > > +/* load of prod.tail can't be reordered before cons.head */
> > > +rte_smp_rmb();
> > > +
> > >  const uint32_t cons_tail = r->cons.tail;
> > >  /*
> > >   *  The subtraction is done between two unsigned 32bits value @@
> > > -517,6 +521,10 @@ __rte_ring_move_cons_head(struct rte_ring *r, int
> > > is_sc,  n = max;
> > >
> > >  *old_head = r->cons.head;
> > > +
> > > +/* load of prod.tail can't be reordered before cons.head */
> > > +rte_smp_rmb();
> > > +
> > >  const uint32_t prod_tail = r->prod.tail;
> > >  /* The subtraction is done between two unsigned 32bits value
> > >   * (the result is always modulo 32 bits even if we have
> > > --
> > > 2.7.4
> > >
> >
> > The explanation convinces me.
> >
> > However, since it's in a critical path, it would be good to have other
> > opinions. This patch reminds me this discussion, that was also related
> > to memory barrier, but at another place:
> > http://dpdk.org/ml/archives/dev/2016-July/043765.html
> > Lead to that patch:
> > http://dpdk.org/browse/dpdk/commit/?id=ecc7d10e448e
> > But finally reverted:
> > http://dpdk.org/browse/dpdk/commit/?id=c3acd92746c3
> >
> > Konstantin, Jerin, do you have any comment?
> 
> For IA, as rte_smp_rmb() is just a compiler barrier, that patch shouldn't
> make any difference, but I can't see how read reordering would screw things
> up here...
> Probably it's just me, and the arm or ppc guys could explain what the
> problem would be if, say, cons.tail is read before prod.head in
> __rte_ring_move_prod_head().
> I wonder, is there a simple test-case to reproduce that problem (on arm or
> ppc)?
> Probably a new test-case for the rte_ring autotest is needed, or is it
> possible to reproduce it with an existing one?
> Konstantin


Hi,

I think this is a real problem here. We have fixed it so that we read both head and tail atomically, as otherwise you may hit the problem you are seeing. This works only on 64-bit, of course.

Thanks for pointing out the revert, I was unaware of it:
http://dpdk.org/browse/dpdk/commit/?id=c3acd92746c3

Nevertheless, the original fix was as such correct at a higher level; both PPC and ARM just happened to use a strong enough default read barrier, which already guaranteed sufficient ordering. As we are optimizing cycles here, I am of course fine with the revert, as long as we all remember what was going on there.

BR,
--
 Juhamatti


