[dpdk-dev] [PATCH v2] vhost: improve dirty pages logging performance

Tiwei Bie tiwei.bie at intel.com
Wed May 16 10:08:58 CEST 2018


On Wed, May 16, 2018 at 09:49:45AM +0200, Maxime Coquelin wrote:
> On 05/16/2018 08:21 AM, Tiwei Bie wrote:
> > On Tue, May 15, 2018 at 07:30:21PM +0200, Maxime Coquelin wrote:
[...]
> > > @@ -309,7 +322,15 @@ struct virtio_net {
> > >   static __rte_always_inline void
> > >   vhost_set_bit(unsigned int nr, volatile uint8_t *addr)
> > >   {
> > > +#if defined(RTE_TOOLCHAIN_GCC) && (GCC_VERSION < 70100)
> > 
> > Just curious, is there any reference about why
> > this version was chosen? Thanks!
> 
> I googled Michael's reference to the LWN article [0], and it mentions
> GCC 7.1.
> 
> I haven't checked myself, though, whether the generated code is
> actually different with GCC >= 7.1.
> 
> [0]: https://lwn.net/Articles/691128/

Thanks for the link!

> > 
> > > +		/*
> > > +		 * __sync_ built-ins are deprecated, but __atomic_ ones
> > > +		 * are sub-optimized in older GCC versions.
> > > +		 */
> > 
> > The indent isn't right (it should be just one tab here).
> Right, will fix.
> 
> > 
> > >   	__sync_fetch_and_or_8(addr, (1U << nr));
> 
> This is unrelated to this patch set, but according to the GCC doc [1],
> shouldn't we use __sync_fetch_and_or_1, since the suffix is the operand
> size in bytes?
> 
> [1]: https://gcc.gnu.org/onlinedocs/gcc/_005f_005fsync-Builtins.html

You are right, I didn't notice it. Please fix this as well.
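
Just to make sure we mean the same thing, I'd expect the helper to end
up roughly like this with both changes applied (only a sketch; the
GCC_VERSION guard and the comment are taken from your patch, with the
indentation and the size suffix fixed):

static __rte_always_inline void
vhost_set_bit(unsigned int nr, volatile uint8_t *addr)
{
#if defined(RTE_TOOLCHAIN_GCC) && (GCC_VERSION < 70100)
	/*
	 * __sync_ built-ins are deprecated, but __atomic_ ones
	 * are sub-optimized in older GCC versions.
	 */
	__sync_fetch_and_or_1(addr, (1U << nr));
#else
	__atomic_fetch_or(addr, (1U << nr), __ATOMIC_RELAXED);
#endif
}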

> 
> > > +#else
> > > +	__atomic_fetch_or(addr, (1U << nr), __ATOMIC_RELAXED);
[...]
> > > @@ -106,6 +106,8 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
> > >   	rte_smp_wmb();
> > > +	vhost_log_cache_sync(dev, vq);
> > 
> > Each time we call vhost_log_cache_sync(), there
> > is already an rte_smp_wmb() in the caller to
> > protect the used->idx update. So maybe there is
> > no need to call rte_smp_wmb() inside
> > vhost_log_cache_sync().
> 
> Right, I can remove it from vhost_log_cache_sync(), and
> maybe add a comment there stating that a write barrier
> is expected before calling the function.

Good idea.
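
Maybe something along these lines above vhost_log_cache_sync() would
make the expectation explicit (just a suggestion for the wording):

/*
 * Callers are expected to issue a write barrier (e.g. rte_smp_wmb())
 * before calling this function; the barrier already present in
 * flush_shadow_used_ring() to protect the used->idx update is
 * sufficient, so no rte_smp_wmb() is done here.
 */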

Best regards,
Tiwei Bie

