[PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a read/write one

Xia, Chenbo chenbo.xia at intel.com
Thu Jun 1 03:45:16 CEST 2023


> -----Original Message-----
> From: Eelco Chaudron <echaudro at redhat.com>
> Sent: Wednesday, May 31, 2023 7:14 PM
> To: Maxime Coquelin <maxime.coquelin at redhat.com>
> Cc: Xia, Chenbo <chenbo.xia at intel.com>; david.marchand at redhat.com;
> dev at dpdk.org
> Subject: Re: [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a
> read/write one
> 
> 
> 
> On 31 May 2023, at 11:27, Maxime Coquelin wrote:
> 
> > On 5/31/23 08:37, Xia, Chenbo wrote:
> >> Hi Eelco,
> >>
> >>> -----Original Message-----
> >>> From: Eelco Chaudron <echaudro at redhat.com>
> >>> Sent: Wednesday, May 17, 2023 5:09 PM
> >>> To: maxime.coquelin at redhat.com; Xia, Chenbo <chenbo.xia at intel.com>;
> >>> david.marchand at redhat.com
> >>> Cc: dev at dpdk.org
> >>> Subject: [PATCH v3 1/4] vhost: change vhost_virtqueue access lock to a
> >>> read/write one
> >>>
> >>> This change will allow the vhost interrupt datapath handling to be
> >>> split between two processes without one of them holding an explicit
> >>> lock.
> >>>
> >>> Signed-off-by: Eelco Chaudron <echaudro at redhat.com>
> >>> ---
> >>>   lib/eal/include/generic/rte_rwlock.h |   17 ++++++
> >>>   lib/vhost/vhost.c                    |   46 +++++++++--------
> >>>   lib/vhost/vhost.h                    |    4 +-
> >>>   lib/vhost/vhost_user.c               |   14 +++--
> >>>   lib/vhost/virtio_net.c               |   90 +++++++++++++++++-----------------
> >>>   5 files changed, 94 insertions(+), 77 deletions(-)
> >>>
> >>> diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
> >>> index 71e2d8d5f4..9e083bbc61 100644
> >>> --- a/lib/eal/include/generic/rte_rwlock.h
> >>> +++ b/lib/eal/include/generic/rte_rwlock.h
> >>> @@ -236,6 +236,23 @@ rte_rwlock_write_unlock(rte_rwlock_t *rwl)
> >>>   	__atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
> >>>   }
> >>>
> >>> +/**
> >>> + * Test if the write lock is taken.
> >>> + *
> >>> + * @param rwl
> >>> + *   A pointer to a rwlock structure.
> >>> + * @return
> >>> + *   1 if the write lock is currently taken; 0 otherwise.
> >>> + */
> >>> +static inline int
> >>> +rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
> >>> +{
> >>> +	if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
> >>> +		return 1;
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
> >>
> >> Again we need to update the release notes as it's a new EAL API.
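
For context, a minimal sketch of how such a query helper is typically
consumed, e.g. in a debug assertion guarding code that must only run
with the write lock held (the structure and function below are
hypothetical, not taken from this series):

    #include <stdint.h>
    #include <rte_debug.h>
    #include <rte_rwlock.h>

    /* Illustrative only: a queue-like structure protected by an rwlock. */
    struct demo_queue {
            rte_rwlock_t access_lock;
            uint16_t last_used_idx;
    };

    /* Must be called with access_lock held for writing; the new helper
     * lets a debug build (RTE_ENABLE_ASSERT) check that precondition. */
    static void
    demo_queue_reset(struct demo_queue *q)
    {
            RTE_ASSERT(rte_rwlock_write_is_locked(&q->access_lock));
            q->last_used_idx = 0;
    }
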
> >>
> >>>   /**
> >>>    * Try to execute critical section in a hardware memory transaction, if it
> >>>    * fails or not available take a read lock
> >>> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> >>> index ef37943817..74bdbfd810 100644
> >>> --- a/lib/vhost/vhost.c
> >>> +++ b/lib/vhost/vhost.c
> >>> @@ -393,9 +393,9 @@ free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
> >>>   	else
> >>>   		rte_free(vq->shadow_used_split);
> >>>
> >>> -	rte_spinlock_lock(&vq->access_lock);
> >>> +	rte_rwlock_write_lock(&vq->access_lock);
> >>>   	vhost_free_async_mem(vq);
> >>> -	rte_spinlock_unlock(&vq->access_lock);
> >>> +	rte_rwlock_write_unlock(&vq->access_lock);
> >>>   	rte_free(vq->batch_copy_elems);
> >>>   	vhost_user_iotlb_destroy(vq);
> >>>   	rte_free(vq->log_cache);
> >>> @@ -630,7 +630,7 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
> >>>
> >>>   		dev->virtqueue[i] = vq;
> >>>   		init_vring_queue(dev, vq, i);
> >>> -		rte_spinlock_init(&vq->access_lock);
> >>> +		rte_rwlock_init(&vq->access_lock);
> >>>   		vq->avail_wrap_counter = 1;
> >>>   		vq->used_wrap_counter = 1;
> >>>   		vq->signalled_used_valid = false;
> >>> @@ -1305,14 +1305,14 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
> >>>   	if (!vq)
> >>>   		return -1;
> >>>
> >>> -	rte_spinlock_lock(&vq->access_lock);
> >>> +	rte_rwlock_read_lock(&vq->access_lock);
> >>>
> >>>   	if (vq_is_packed(dev))
> >>>   		vhost_vring_call_packed(dev, vq);
> >>>   	else
> >>>   		vhost_vring_call_split(dev, vq);
> >>>
> >>> -	rte_spinlock_unlock(&vq->access_lock);
> >>> +	rte_rwlock_read_unlock(&vq->access_lock);
> >>
> >> Not sure about this. vhost_vring_call_packed/split is changing some
> >> fields in the vq. Should we use write lock here?
> >
> > I don't think so, the purpose of the access_lock is not to make the
> > datapath thread-safe, but to protect the datapath from metadata changes
> > by the control path.
> 
> Thanks Chenbo for the review, and see Maxime’s comment above. Does this
> clarify your concern/question?

Makes sense to me. Thanks Eelco and Maxime!

With the release note added:

Reviewed-by: Chenbo Xia <chenbo.xia at intel.com> 
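
For illustration, a minimal sketch of the access_lock convention
discussed above (all names are hypothetical, not taken from lib/vhost):
the control path takes the lock for writing while it changes virtqueue
metadata, whereas datapath operations such as the vring call only take
it for reading, so concurrent datapath threads/processes do not
serialize against each other.

    #include <rte_rwlock.h>

    struct demo_vq {
            rte_rwlock_t access_lock;
            int enabled;
    };

    /* Control path: exclusive access while metadata is rewritten. */
    static void
    control_path_reconfigure(struct demo_vq *vq)
    {
            rte_rwlock_write_lock(&vq->access_lock);
            vq->enabled = 0;                /* safe: no reader can run now */
            rte_rwlock_write_unlock(&vq->access_lock);
    }

    /* Datapath: shared access; multiple readers may run concurrently
     * and are only excluded while the control path holds the write lock. */
    static int
    datapath_kick(struct demo_vq *vq)
    {
            int ret = -1;

            rte_rwlock_read_lock(&vq->access_lock);
            if (vq->enabled)
                    ret = 0;                /* e.g. notify the guest here */
            rte_rwlock_read_unlock(&vq->access_lock);

            return ret;
    }
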

> 
> >>
> >> Thanks,
> >> Chenbo
> >>


