[dpdk-dev] Fwd: [PATCH v3 2/2] eal/x86: Use lock-prefixed instructions to reduce cost of rte_smp_mb()

Ananyev, Konstantin konstantin.ananyev at intel.com
Mon Jan 29 10:29:52 CET 2018


Hi Michael,

> 
> On Mon, Jan 15, 2018 at 04:15:00PM +0100, Maxime Coquelin wrote:
> > Hi Michael,
> >
> > FYI:
> >
> > -------- Forwarded Message --------
> > Subject: [dpdk-dev] [PATCH v3 2/2] eal/x86: Use lock-prefixed instructions
> > to reduce cost of rte_smp_mb()
> > Date: Mon, 15 Jan 2018 15:09:31 +0000
> > From: Konstantin Ananyev <konstantin.ananyev at intel.com>
> > To: dev at dpdk.org
> > CC: Konstantin Ananyev <konstantin.ananyev at intel.com>
> >
> > On x86 it is possible to use lock-prefixed instructions to get
> > an effect similar to mfence.
> > As pointed out by the Java folks, on most modern HW that gives
> > better performance than using mfence:
> > https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> > This patch adopts that technique for the rte_smp_mb() implementation.
> > On BDW 2.2, mb_autotest on a single lcore reports a 2X cycle reduction,
> > i.e. from ~110 to ~55 cycles per operation.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev at intel.com>
> > Acked-by: Bruce Richardson <bruce.richardson at intel.com>
> > ---
> >  .../common/include/arch/x86/rte_atomic.h | 44 +++++++++++++++++++++-
> >  1 file changed, 42 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/librte_eal/common/include/arch/x86/rte_atomic.h b/lib/librte_eal/common/include/arch/x86/rte_atomic.h
> > index 8469f97e1..9d466d94a 100644
> > --- a/lib/librte_eal/common/include/arch/x86/rte_atomic.h
> > +++ b/lib/librte_eal/common/include/arch/x86/rte_atomic.h
> > @@ -26,12 +26,52 @@ extern "C" {
> >
> >  #define rte_rmb() _mm_lfence()
> >
> > -#define rte_smp_mb() rte_mb()
> > -
> >  #define rte_smp_wmb() rte_compiler_barrier()
> >
> >  #define rte_smp_rmb() rte_compiler_barrier()
> >
> > +/*
> > + * From Intel Software Development Manual; Vol 3;
> > + * 8.2.2 Memory Ordering in P6 and More Recent Processor Families:
> > + * ...
> > + * . Reads are not reordered with other reads.
> > + * . Writes are not reordered with older reads.
> > + * . Writes to memory are not reordered with other writes,
> > + *   with the following exceptions:
> > + *   . streaming stores (writes) executed with the non-temporal move
> > + *     instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD); and
> > + *   . string operations (see Section 8.2.4.1).
> > + *  ...
> > + * . Reads may be reordered with older writes to different locations
> > + *   but not with older writes to the same location.
> > + * . Reads or writes cannot be reordered with I/O instructions,
> > + * locked instructions, or serializing instructions.
> > + * . Reads cannot pass earlier LFENCE and MFENCE instructions.
> > + * . Writes ... cannot pass earlier LFENCE, SFENCE, and MFENCE
> > + *   instructions.
> > + * . LFENCE instructions cannot pass earlier reads.
> > + * . SFENCE instructions cannot pass earlier writes ...
> > + * . MFENCE instructions cannot pass earlier reads, writes ...
> > + *
> > + * As pointed out by the Java folks, that makes it possible to use
> > + * lock-prefixed instructions to get the same effect as mfence, and on
> > + * most modern HW that gives better performance than using mfence:
> > + * https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> > + * The basic idea is to use a lock-prefixed add with some dummy memory
> > + * location as the destination. From their experiments, 128B (2 cache
> > + * lines) below the current stack pointer looks like a good candidate.
> > + * So below we use that technique for the rte_smp_mb() implementation.
> > + */
> > +
> > +static __rte_always_inline void
> > +rte_smp_mb(void)
> > +{
> > +#ifdef RTE_ARCH_I686
> > +	asm volatile("lock addl $0, -128(%%esp); " ::: "memory");
> > +#else
> > +	asm volatile("lock addl $0, -128(%%rsp); " ::: "memory");
> > +#endif
> > +}
> > +
> >  #define rte_io_mb() rte_mb()
> >
> >  #define rte_io_wmb() rte_compiler_barrier()
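
To illustrate where the full barrier matters on x86: under the TSO model
stores may be reordered with later loads to other locations, so a
Dekker-style handshake needs rte_smp_mb() between its store and its load.
A minimal sketch (the flag variables are hypothetical, not part of the
patch):

#include <rte_atomic.h>

/* Hypothetical per-lcore flags, for illustration only. */
static volatile int my_flag;    /* written by this lcore */
static volatile int peer_flag;  /* written by the other lcore */

static inline int
try_enter(void)
{
	my_flag = 1;    /* announce intent */
	/* Without a full barrier the store above may still sit in the
	 * store buffer while the load below executes, so both lcores
	 * could read 0 and both enter. */
	rte_smp_mb();
	return peer_flag == 0;
}
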
> 
> In my testing this appears to be suboptimal when the calling
> function is large. The following seems to work better:
> 
> +static __rte_always_inline void
> +rte_smp_mb(void)
> +{
> +#ifdef RTE_ARCH_I686
> +	asm volatile("lock addl $0, -132(%%esp); " ::: "memory");
> +#else
> +	asm volatile("lock addl $0, -132(%%rsp); " ::: "memory");
> +#endif
> +}
> +
> 
> The reason most likely is that an access at -128(%rsp) still overlaps
> the x86-64 red zone by 4 bytes.

I tried what you suggested, but for my cases I didn't see any improvement so far.
Can you explain a bit more why you expect it to be faster?
Perhaps it only shows up in some particular scenario?
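
For reference, a minimal single-lcore loop in the spirit of mb_autotest
(a sketch only, not the actual test code) is enough to see the per-call
cost with rte_rdtsc():

#include <stdio.h>
#include <stdint.h>

#include <rte_atomic.h>
#include <rte_cycles.h>

#define MB_ITERS 10000000UL	/* arbitrary iteration count */

static void
measure_smp_mb(void)
{
	uint64_t start, end;
	unsigned long i;

	start = rte_rdtsc();
	for (i = 0; i < MB_ITERS; i++)
		rte_smp_mb();
	end = rte_rdtsc();

	printf("cycles per rte_smp_mb(): %.1f\n",
	       (double)(end - start) / MB_ITERS);
}
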
Konstantin

> 
> Feel free to reuse, and add
> Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
> 
> 
> > --
> > 2.13.6

