[dpdk-dev] vhost: support non multiqueue guests

Message ID 1517409743-24332-1-git-send-email-alan.dewar@att.com (mailing list archive)
State Rejected, archived
Delegated to: Maxime Coquelin

Checks

Context               Check     Description
ci/checkpatch         success   coding style OK
ci/Intel-compilation  success   Compilation OK

Commit Message

Alan Dewar Jan. 31, 2018, 2:42 p.m. UTC
  From: Alan Dewar <alan.dewar@att.com>

Performance of vhost interfaces can be improved by having multiple
TX/RX queue-pairs.  QEMU can be told to use multiple queue-pairs for
a vhost interface when starting the guest VM.  The DPDK will also
configure multiple queue-pairs in response to requests from QEMU.
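(For illustration, a multiqueue vhost-user port is typically requested on the
QEMU command line with options along these lines; the socket path and queue
count here are placeholders, not taken from this patch:

    -chardev socket,id=char0,path=/tmp/vhost-user0.sock
    -netdev type=vhost-user,id=net0,chardev=char0,queues=4
    -device virtio-net-pci,netdev=net0,mq=on,vectors=10

where vectors is conventionally 2 * queues + 2.)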

Later, when the guest VM reaches the running state, it may decide not to
support the multiqueue option.  This information is passed down from the
guest VM to QEMU, and from QEMU to the DPDK, but the DPDK ignores it.

Because the guest VM doesn't support the multiqueue option, it will only
initialise the first queue-pair, and in turn the DPDK will not signal
that the vhost interface is up.

This change allows the DPDK to signal that the vhost interface is up once
only the first queue-pair is fully initialised, provided the guest VM does
not support the multiqueue option.
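
As a minimal sketch of what "signalling that the interface is up" means on
the application side (illustrative only, not part of the patch): the vhost
library invokes the application's registered new_device callback once it
considers the device ready, and rte_vhost_get_queue_num() reports the number
of usable queue-pairs.  With this change, a non-multiqueue guest would
report one queue-pair here.

    #include <stdio.h>
    #include <rte_vhost.h>

    /* Invoked by librte_vhost once the required queue-pairs are ready. */
    static int
    new_device(int vid)
    {
            uint32_t nr_qp = rte_vhost_get_queue_num(vid);

            printf("vhost device %d is up with %u queue pair(s)\n",
                   vid, nr_qp);
            return 0;
    }

    static const struct vhost_device_ops ops = {
            .new_device = new_device,
    };

    /* Registered against the vhost-user socket with:
     *     rte_vhost_driver_callback_register(path, &ops);
     */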

Signed-off-by: Alan Dewar <alan.dewar@att.com>
---
 lib/librte_vhost/vhost.c      |  6 +++---
 lib/librte_vhost/vhost_user.c |  8 +++++++-
 lib/librte_vhost/virtio_net.c | 12 +++++++++---
 3 files changed, 19 insertions(+), 7 deletions(-)
  

Comments

Maxime Coquelin Feb. 5, 2018, 8:42 a.m. UTC | #1
Hi Alan,

On 01/31/2018 03:42 PM, alangordondewar@gmail.com wrote:
> From: Alan Dewar <alan.dewar@att.com>
> 
> Performance of vhost interfaces can be improved by having multiple
> TX/RX queue-pairs.  QEMU can be told to use multiple queue-pairs for
> a vhost interface when starting the guest VM.  The DPDK will also
> configure multiple queue-pairs in response to requests from QEMU.
> 
> Later when the guest VM reaches the running state, it can decide to not
> support the multiqueue option.   This information is passed down from
> the guest VM to QEMU, and from QEMU to the DPDK, but the DPDK ignores
> it.
> 
> Because the guest VM doesn't support the multiqueue option it will only
> initialise the first queue-pair, and in turn the DPDK will not signal
> that the vhost interface is up.
> 
> This change allows the DPDK to signal that the vhost interface is up
> after only the first queue-pair is fully initialised if the guest VM
> does not support the multiqueue option.
> 
> Signed-off-by: Alan Dewar <alan.dewar@att.com>
> ---
>   lib/librte_vhost/vhost.c      |  6 +++---
>   lib/librte_vhost/vhost_user.c |  8 +++++++-
>   lib/librte_vhost/virtio_net.c | 12 +++++++++---
>   3 files changed, 19 insertions(+), 7 deletions(-)

We already implemented a workaround to fix this issue:
  commit e29109323595beb3884da58126ebb3b878cb66f5
Author: Maxime Coquelin <maxime.coquelin@redhat.com>
Date:   Wed Dec 13 09:51:09 2017 +0100

     vhost: destroy unused virtqueues when multiqueue not negotiated

     QEMU sends VHOST_USER_SET_VRING_CALL requests for all queues
     declared in QEMU command line before the guest is started.
     It has the effect in DPDK vhost-user backend to allocate vrings
     for all queues declared by QEMU.

     If the first driver being used does not support multiqueue,
     the device never changes to VIRTIO_DEV_RUNNING state as only
     the first queue pair is initialized. One driver impacted by
     this bug is virtio-net's iPXE driver which does not support
     VIRTIO_NET_F_MQ feature.

     It is safe to destroy unused virtqueues in SET_FEATURES request
     handler, as it is ensured the device is not in running state
     at this stage, so virtqueues aren't being processed.

     Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
     Acked-by: Laszlo Ersek <lersek@redhat.com>
     Acked-by: Yuanhan Liu <yliu@fridaylinux.org>


Could you try with latest master and confirm it solves the issue on your
side?

Cheers,
Maxime
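
For readers following the thread, a condensed sketch of that workaround
(paraphrased from the commit quoted above, not verbatim DPDK source;
cleanup_vq() and free_vq() are the library's internal helpers):

    /* In the VHOST_USER_SET_FEATURES handler, after the negotiated
     * features are stored: if the guest did not negotiate
     * VIRTIO_NET_F_MQ, drop every vring beyond the first RX/TX pair.
     * This is safe because the device cannot yet be running. */
    if (!(dev->features & (1ULL << VIRTIO_NET_F_MQ))) {
            while (dev->nr_vring > 2) {
                    struct vhost_virtqueue *vq;

                    vq = dev->virtqueue[--dev->nr_vring];
                    if (!vq)
                            continue;
                    dev->virtqueue[dev->nr_vring] = NULL;
                    cleanup_vq(vq, 1);
                    free_vq(vq);
            }
    }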
  
Dewar, Alan Feb. 5, 2018, 9:14 a.m. UTC | #2
Hi Maxime,

Many thanks for the heads up.  I'll try out your suggested fix later today and let you know if it resolves my issue.

Regards
Alan

  
Alan Dewar Feb. 5, 2018, 10:18 a.m. UTC | #3
Hi Maxime,

Just confirming that the commit that you suggested fixes the problem
that I was seeing.

Many thanks for your help.

Regards
Alan

  

Patch

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 1dd9adb..c952756 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -397,7 +397,7 @@  rte_vhost_get_queue_num(int vid)
 	if (dev == NULL)
 		return 0;
 
-	return dev->nr_vring / 2;
+	return (dev->features & VIRTIO_NET_F_MQ) ? (dev->nr_vring / 2) : 1;
 }
 
 uint16_t
@@ -408,7 +408,7 @@  rte_vhost_get_vring_num(int vid)
 	if (dev == NULL)
 		return 0;
 
-	return dev->nr_vring;
+	return (dev->features & VIRTIO_NET_F_MQ) ? dev->nr_vring : 2;
 }
 
 int
@@ -590,7 +590,7 @@  rte_vhost_rx_queue_count(int vid, uint16_t qid)
 	if (dev == NULL)
 		return 0;
 
-	if (unlikely(qid >= dev->nr_vring || (qid & 1) == 0)) {
+	if (unlikely(qid >= rte_vhost_get_vring_num(vid) || (qid & 1) == 0)) {
 		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
 			dev->vid, __func__, qid);
 		return 0;
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 1dd1a61..d709cb6 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -773,11 +773,17 @@  virtio_is_ready(struct virtio_net *dev)
 {
 	struct vhost_virtqueue *vq;
 	uint32_t i;
+	uint32_t vrings_init;
 
 	if (dev->nr_vring == 0)
 		return 0;
 
-	for (i = 0; i < dev->nr_vring; i++) {
+	/*
+	 * If the guest VM doesn't support the multiqueue feature, only the
+	 * first two vrings will be fully initialised.
+	 */
+	vrings_init = (dev->features & VIRTIO_NET_F_MQ) ? dev->nr_vring : 2;
+	for (i = 0; i < vrings_init; i++) {
 		vq = dev->virtqueue[i];
 
 		if (!vq_is_ready(vq))
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index edfab3b..ca48499 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -294,9 +294,11 @@  virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	struct vring_desc *descs;
 	uint16_t used_idx;
 	uint32_t i, sz;
+	uint16_t nr_vring;
 
 	LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	nr_vring = rte_vhost_get_vring_num(dev->vid);
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, nr_vring))) {
 		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
 			dev->vid, __func__, queue_id);
 		return 0;
@@ -619,9 +621,11 @@  virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 	uint16_t num_buffers;
 	struct buf_vector buf_vec[BUF_VECTOR_MAX];
 	uint16_t avail_head;
+	uint16_t nr_vring;
 
 	LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+	nr_vring = rte_vhost_get_vring_num(dev->vid);
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, nr_vring))) {
 		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
 			dev->vid, __func__, queue_id);
 		return 0;
@@ -1123,12 +1127,14 @@  rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	uint32_t i = 0;
 	uint16_t free_entries;
 	uint16_t avail_idx;
+	uint16_t nr_vring;
 
 	dev = get_device(vid);
 	if (!dev)
 		return 0;
 
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
+	nr_vring = rte_vhost_get_vring_num(dev->vid);
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, nr_vring))) {
 		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
 			dev->vid, __func__, queue_id);
 		return 0;