[dpdk-dev] [PATCH] atomic: clarify use of memory barriers

Ananyev, Konstantin konstantin.ananyev at intel.com
Tue May 20 18:35:10 CEST 2014


Hi Oliver,

>- optimize some code to avoid a real memory barrier when not required    (timers, virtio, ...)

That seems like a good thing to me.

> - make the code more readable to distinguish between the 2 kinds of memory barrier.

That part seems a bit misleading to me.
rte_compiler_barrier() - is a barrier just for the compiler, not for the real cpu.
It only guarantees that the compiler won't reorder instructions across it while emitting the code.

Looking at Intel Memory Ordering rules (Intel System PG, section 8.2):

1) Reads may be reordered with older writes to different locations but not with older writes to the same location.

So with the following fragment of code:
int a;
extern int *x, *y;

L0: *y = 0;
    rte_compiler_barrier();
L1: a = *x;

There is no guarantee that the store at L0 will always complete before the load at L1.
Which means to me that rte_smp_mb() can't be identical to a compiler barrier, but should be a real 'mfence' instruction instead.

2) Writes to memory are not reordered with other writes, with the following exceptions:
   ...
   streaming stores (writes) executed with the non-temporal move instructions (MOVNTI, MOVNTQ, MOVNTDQ, MOVNTPS, and MOVNTPD); 
   ...

So with the following fragment of code:
extern int *x;
extern __m128i a, *p;

L0: _mm_stream_si128(p, a);
    rte_compiler_barrier();
L1: *x = 0;

There is no guarantee that the store at L0 will always complete before the store at L1.
Which means to me that rte_smp_wmb() can't be identical to a compiler barrier, but should be a real 'sfence' instruction instead.

The only replacement that seems safe to me is:
#define	rte_smp_rmb() rte_compiler_barrier()

But now there seems to be a confusion: everyone has to remember that smp_mb() and smp_wmb() are 'real' fences, while smp_rmb() is not.
That's why my suggestion was to simply keep using rte_compiler_barrier() in all cases where we don't need a real fence.

Thanks
Konstantin

-----Original Message-----
From: Olivier MATZ [mailto:olivier.matz at 6wind.com] 
Sent: Tuesday, May 20, 2014 1:13 PM
To: Ananyev, Konstantin; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH] atomic: clarify use of memory barriers

Hi Konstantin,

Thank you for your review and feedback.

On 05/20/2014 12:05 PM, Ananyev, Konstantin wrote:
>> Note that on x86 CPUs, memory barriers between different cores can be guaranteed by a simple compiler barrier.
>
> I don't think this is totally correct.
> Yes, for Intel CPUs a memory barrier can often be avoided thanks to the nearly strict memory ordering.
> Though there are a few cases where reordering is possible and where fence instructions are needed.

I tried to mimic the behavior of Linux, which differentiates *mb() from
smp_*mb(), but I was too hasty. In Linux, we have [1]:

   smp_mb() = mb() = asm volatile("mfence":::"memory")
   smp_rmb() = compiler_barrier()
   smp_wmb() = compiler_barrier()

At least this should be fixed in the patch. By the way, just for reference,
the idea of the patch came from a discussion we had on the list [2].

> For me:
> +#define	rte_smp_rmb() rte_compiler_barrier()
> Seems a bit misleading, as there is no real fence.
> So I suggest we keep rte_compiler_barrier() naming and usage.

The objectives of the patch (which was probably not explained very
clearly in the commit log) were:
- make the code more readable to distinguish between the 2 kinds of
   memory barrier.
- optimize some code to avoid a real memory barrier when not required
   (timers, virtio, ...)

Having a compiler barrier in place of a memory barrier in the code
does not really help to understand what the developer wanted to do.
In the current code we can see that the use of rte_compiler_barrier()
is ambiguous, as it needs a comment to clarify the situation:

	rte_compiler_barrier();   /* rmb */

Don't you think we could fix the patch but keep its logic?

Regards,
Olivier

[1] http://lxr.free-electrons.com/source/arch/x86/include/asm/barrier.h#L81
[2] http://dpdk.org/ml/archives/dev/2014-March/001741.html
