vhost: return -EAGAIN during unregistering vhost if it is busy.

Message ID 1584007039-12437-1-git-send-email-wangzhike@jd.com (mailing list archive)
State Rejected, archived
Delegated to: Maxime Coquelin
Series vhost: return -EAGAIN during unregistering vhost if it is busy.

Checks

Context                       Check    Description
ci/checkpatch                 warning  coding style issues
ci/Intel-compilation          success  Compilation OK
ci/iol-testing                success  Testing PASS
ci/iol-mellanox-Performance   success  Performance Testing PASS
ci/iol-intel-Performance      success  Performance Testing PASS
ci/travis-robot               success  Travis build: passed

Commit Message

王志克 March 12, 2020, 9:57 a.m. UTC
  The vhost_user_read_cb() and rte_vhost_driver_unregister()
can be called at the same time by two threads, which may lead to a deadlock.
E.g. thread1 calls vhost_user_read_cb()->vhost_user_get_vring_base()->destroy_device(),
then thread2 calls rte_vhost_driver_unregister() and keeps retrying fdset_try_del() in a loop.

Some applications implement destroy_device() as a blocking function, e.g.
OVS calls ovsrcu_synchronize() inside destroy_device(). As a result,
thread1 (e.g. vhost_events) is blocked waiting for thread2 (e.g. ovs-vswitchd)
to quiesce, while thread2 loops waiting for thread1 to give up its use of the
vhost fd, which results in a deadlock.

It is better to return -EAGAIN to the application, which can decide how to
handle it (e.g. OVS can call ovsrcu_quiesce() and then retry).

Signed-off-by: Zhike Wang <wangzhike@jd.com>
---
 lib/librte_vhost/rte_vhost.h | 4 +++-
 lib/librte_vhost/socket.c    | 8 ++++----
 2 files changed, 7 insertions(+), 5 deletions(-)
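
For illustration, here is a minimal sketch of how an application could consume
the proposed -EAGAIN return value. The unregister_vhost_port() and app_quiesce()
names are made up for this sketch (in OVS, app_quiesce() would correspond to
ovsrcu_quiesce()); it is not part of the patch itself.

/* Illustrative sketch only -- not part of this patch.
 * unregister_vhost_port() and app_quiesce() are hypothetical names;
 * in OVS, app_quiesce() would map to ovsrcu_quiesce(). */
#include <errno.h>
#include <rte_vhost.h>

static void
app_quiesce(void)
{
    /* Let RCU (or a similar mechanism) make progress so the thread
     * currently blocked inside destroy_device() can complete. */
}

static int
unregister_vhost_port(const char *path)
{
    int ret;

    do {
        ret = rte_vhost_driver_unregister(path);
        if (ret == -EAGAIN)
            app_quiesce();  /* device busy: quiesce, then retry */
    } while (ret == -EAGAIN);

    return ret;
}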
  

Comments

王志克 March 18, 2020, 3:31 a.m. UTC | #1
Involving the openvswitch group, since this fix is highly coupled with OVS.
Comments are welcome.
At 2020-03-12 17:57:19, "Zhike Wang" <wangzhike@jd.com> wrote:
>The vhost_user_read_cb() and rte_vhost_driver_unregister()
>can be called at the same time by 2 threads, and may lead to deadlock.
>Eg thread1 calls vhost_user_read_cb()->vhost_user_get_vring_base()->destroy_device(),
>then thread2 calls rte_vhost_driver_unregister(), and will retry the fdset_try_del() in loop.
>
>Some application implements destroy_device() as a blocking function, eg
>OVS calls ovsrcu_synchronize() insides destroy_device(). As a result,
>thread1(eg vhost_events) is blocked to wait quiesce of thread2(eg ovs-vswitchd),
>and thread2 is in a loop to wait thread1 to give up the use of the vhost fd,
>then leads to deadlock.
>
>It is better to return -EAGAIN to application, who will decide how to handle
>(eg OVS can call ovsrcu_quiesce() and then retry).
>
>Signed-off-by: Zhike Wang <wangzhike@jd.com>
>---
> lib/librte_vhost/rte_vhost.h | 4 +++-
> lib/librte_vhost/socket.c    | 8 ++++----
> 2 files changed, 7 insertions(+), 5 deletions(-)
>
>diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
>index c7b619a..276db11 100644
>--- a/lib/librte_vhost/rte_vhost.h
>+++ b/lib/librte_vhost/rte_vhost.h
>@@ -389,7 +389,9 @@ void rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
>  */
> int rte_vhost_driver_register(const char *path, uint64_t flags);
> 
>-/* Unregister vhost driver. This is only meaningful to vhost user. */
>+/* Unregister vhost driver. This is only meaningful to vhost user.
>+ * Return -EAGAIN if device is busy, and leave it to be handled by application.
>+ */
> int rte_vhost_driver_unregister(const char *path);
> 
> /**
>diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
>index 7c80121..a75a3f6 100644
>--- a/lib/librte_vhost/socket.c
>+++ b/lib/librte_vhost/socket.c
>@@ -1027,7 +1027,8 @@ struct vhost_user_reconnect_list {
> }
> 
> /**
>- * Unregister the specified vhost socket
>+ * Unregister the specified vhost socket.
>+ * Return -EAGAIN if device is busy, and leave it to be handled by application.
>  */
> int
> rte_vhost_driver_unregister(const char *path)
>@@ -1039,7 +1040,6 @@ struct vhost_user_reconnect_list {
> 	if (path == NULL)
> 		return -1;
> 
>-again:
> 	pthread_mutex_lock(&vhost_user.mutex);
> 
> 	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
>@@ -1063,7 +1063,7 @@ struct vhost_user_reconnect_list {
> 					pthread_mutex_unlock(
> 							&vsocket->conn_mutex);
> 					pthread_mutex_unlock(&vhost_user.mutex);
>-					goto again;
>+					return -EAGAIN;
> 				}
> 
> 				VHOST_LOG_CONFIG(INFO,
>@@ -1085,7 +1085,7 @@ struct vhost_user_reconnect_list {
> 				if (fdset_try_del(&vhost_user.fdset,
> 						vsocket->socket_fd) == -1) {
> 					pthread_mutex_unlock(&vhost_user.mutex);
>-					goto again;
>+					return -EAGAIN;
> 				}
> 
> 				close(vsocket->socket_fd);
>-- 
>1.8.3.1
>
  
Maxime Coquelin April 27, 2020, 8:09 a.m. UTC | #2
On 3/18/20 4:31 AM, 王志克 wrote:
> Involve openvswitch group since this fix is highly coupled with OVS.
> welcome comment.
> At 2020-03-12 17:57:19, "Zhike Wang" <wangzhike@jd.com> wrote:
>> The vhost_user_read_cb() and rte_vhost_driver_unregister()
>> can be called at the same time by 2 threads, and may lead to deadlock.
>> Eg thread1 calls vhost_user_read_cb()->vhost_user_get_vring_base()->destroy_device(),
>> then thread2 calls rte_vhost_driver_unregister(), and will retry the fdset_try_del() in loop.
>>
>> Some application implements destroy_device() as a blocking function, eg
>> OVS calls ovsrcu_synchronize() insides destroy_device(). As a result,
>> thread1(eg vhost_events) is blocked to wait quiesce of thread2(eg ovs-vswitchd),
>> and thread2 is in a loop to wait thread1 to give up the use of the vhost fd,
>> then leads to deadlock.
>>
>> It is better to return -EAGAIN to application, who will decide how to handle
>> (eg OVS can call ovsrcu_quiesce() and then retry).
>>
>> Signed-off-by: Zhike Wang <wangzhike@jd.com>
>> ---
>> lib/librte_vhost/rte_vhost.h | 4 +++-
>> lib/librte_vhost/socket.c    | 8 ++++----
>> 2 files changed, 7 insertions(+), 5 deletions(-)


Isn't it fixed by the commit below, which landed in DPDK v20.02?

commit 5efb18e85f7fdb436d3e56591656051c16802066
Author: Maxime Coquelin <maxime.coquelin@redhat.com>
Date:   Tue Jan 14 19:53:57 2020 +0100

    vhost: fix deadlock on port deletion

    If the vhost-user application (e.g. OVS) deletes the vhost-user
    port while Qemu sends a vhost-user request, a deadlock can
    happen if the request handler tries to acquire vhost-user's
    global mutex, which is also locked by the vhost-user port
    deletion API (rte_vhost_driver_unregister).

    This patch prevents the deadlock by making
    rte_vhost_driver_unregister() to release the mutex and try
    again if a request is being handled to give a chance to
    the request handler to complete.

    Fixes: 8b4b949144b8 ("vhost: fix dead lock on closing in server mode")
    Fixes: 5fbb3941da9f ("vhost: introduce driver features related APIs")
    Cc: stable@dpdk.org

    Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
    Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
    Acked-by: Eelco Chaudron <echaudro@redhat.com>

  
王志克 May 6, 2020, 2:39 a.m. UTC | #3
No, it is a different issue.

The deadlock described in this patch is caused by a blocking function (like
ovsrcu_synchronize) inside the application (like OVS). With this patch, it is
the application that is expected to break the logical deadlock; see the sketch
below.
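
To make the ordering concrete, here is a minimal sketch of the two threads
involved, assuming an OVS-like destroy_device() callback. The names
app_destroy_device() and app_delete_port() are illustrative only; this is not
the actual OVS or DPDK code (headers omitted):

/* Thread 1 (e.g. vhost_events): handles VHOST_USER_GET_VRING_BASE and
 * ends up in the application's destroy_device() callback. In OVS this
 * callback calls ovsrcu_synchronize(), which blocks until every
 * registered thread has quiesced -- including thread 2 below. */
static void
app_destroy_device(int vid)
{
    (void)vid;
    ovsrcu_synchronize();  /* waits for thread 2 to quiesce */
}

/* Thread 2 (e.g. ovs-vswitchd): deletes the vhost port while never
 * quiescing. Before this patch, rte_vhost_driver_unregister() loops on
 * fdset_try_del() until the fd is released, which only happens once
 * thread 1 returns from destroy_device(). Neither side can progress. */
static void
app_delete_port(const char *path)
{
    rte_vhost_driver_unregister(path);  /* spins forever */
}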
  
Maxime Coquelin May 6, 2020, 7:53 a.m. UTC | #4
Hi Zhike,

On 5/6/20 4:39 AM, 王志克 wrote:
> NO, it is different issue.
> 
> The current  deadlock  mentioned in this patch is caused by some
> blocking function (like ovsrcu_synchronize) in application (like OVS).
> In this patch, the application is needed  to break  the logical deadlock. 

Ok, so we need either a non-blocking rte_vhost_driver_unregister() in
DPDK, or a non-blocking .destroy_device() in OVS. Thanks for the
clarification, I misread your original commit message.

The problem with your patch is that it breaks the DPDK ABI, as the current
unregistering API is supposed to be blocking. That means it could not be
backported to older stable branches.

What we can do on the DPDK side is to provide a new non-blocking API for
unregistering, or do

Any chance there is a way for OVS to prevent that?

Thanks,
Maxime
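
As a rough illustration of the non-blocking option mentioned above, the existing
blocking API could be kept as a thin wrapper around a new non-blocking core. The
name rte_vhost_driver_try_unregister() is purely hypothetical and not an actual
DPDK symbol; this is a sketch of one possible layering, not a definitive design:

/* Hypothetical layering sketch -- rte_vhost_driver_try_unregister()
 * does not exist in DPDK. It only illustrates how a non-blocking
 * variant could be added while keeping the blocking semantics (and
 * thus the ABI) of rte_vhost_driver_unregister() unchanged. */
#include <errno.h>

/* Returns 0 on success, -EAGAIN if the socket is still in use,
 * -1 if the path is not registered. */
int rte_vhost_driver_try_unregister(const char *path);

int
rte_vhost_driver_unregister(const char *path)
{
    int ret;

    do {
        ret = rte_vhost_driver_try_unregister(path);
    } while (ret == -EAGAIN);

    return ret;
}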

  

Patch

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index c7b619a..276db11 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -389,7 +389,9 @@  void rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
  */
 int rte_vhost_driver_register(const char *path, uint64_t flags);
 
-/* Unregister vhost driver. This is only meaningful to vhost user. */
+/* Unregister vhost driver. This is only meaningful to vhost user.
+ * Return -EAGAIN if device is busy, and leave it to be handled by application.
+ */
 int rte_vhost_driver_unregister(const char *path);
 
 /**
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index 7c80121..a75a3f6 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -1027,7 +1027,8 @@  struct vhost_user_reconnect_list {
 }
 
 /**
- * Unregister the specified vhost socket
+ * Unregister the specified vhost socket.
+ * Return -EAGAIN if device is busy, and leave it to be handled by application.
  */
 int
 rte_vhost_driver_unregister(const char *path)
@@ -1039,7 +1040,6 @@  struct vhost_user_reconnect_list {
 	if (path == NULL)
 		return -1;
 
-again:
 	pthread_mutex_lock(&vhost_user.mutex);
 
 	for (i = 0; i < vhost_user.vsocket_cnt; i++) {
@@ -1063,7 +1063,7 @@  struct vhost_user_reconnect_list {
 					pthread_mutex_unlock(
 							&vsocket->conn_mutex);
 					pthread_mutex_unlock(&vhost_user.mutex);
-					goto again;
+					return -EAGAIN;
 				}
 
 				VHOST_LOG_CONFIG(INFO,
@@ -1085,7 +1085,7 @@  struct vhost_user_reconnect_list {
 				if (fdset_try_del(&vhost_user.fdset,
 						vsocket->socket_fd) == -1) {
 					pthread_mutex_unlock(&vhost_user.mutex);
-					goto again;
+					return -EAGAIN;
 				}
 
 				close(vsocket->socket_fd);