[dpdk-dev] vhost: remove a hack on queue allocation

Message ID 1488435367-22170-1-git-send-email-yuanhan.liu@linux.intel.com (mailing list archive)
State Accepted, archived
Delegated to: Yuanhan Liu

Checks

Context               Check     Description
ci/checkpatch         success   coding style OK
ci/Intel-compilation  success   Compilation OK

Commit Message

Yuanhan Liu March 2, 2017, 6:16 a.m. UTC
We used to allocate queues based on the index carried in the
SET_VRING_CALL request: if the corresponding queue hadn't been
allocated yet, allocate it.

While that is right in practice (SET_VRING_CALL is the first per-vring
request we get from QEMU during vhost-user negotiation), it is not
right technically: the vhost-user spec does not document that it will
always be the first per-vring request. For example, SET_VRING_ADDR
could also be the first per-vring request.

Thus, we should not depend on SET_VRING_CALL for queue allocation.
Instead, we can catch all per-vring messages at the entrance of the
request handler and allocate the queue pair if it hasn't been
allocated yet.

With that, we can remove the hack.
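
Concretely, the check sits at the top of vhost_user_msg_handler(), before
the request is dispatched. The snippet below is condensed from the patch
that follows (bound check and error logging trimmed):

        /* Map a per-vring request to the vring index it refers to. */
        switch (msg->request) {
        case VHOST_USER_SET_VRING_KICK:
        case VHOST_USER_SET_VRING_CALL:
        case VHOST_USER_SET_VRING_ERR:
                vring_idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
                break;
        case VHOST_USER_SET_VRING_NUM:
        case VHOST_USER_SET_VRING_BASE:
        case VHOST_USER_SET_VRING_ENABLE:
                vring_idx = msg->payload.state.index;
                break;
        case VHOST_USER_SET_VRING_ADDR:
                vring_idx = msg->payload.addr.index;
                break;
        default:
                return 0;       /* not a per-vring request: nothing to do */
        }

        /* Allocate the queue pair on the first per-vring request we see. */
        qp_idx = vring_idx / VIRTIO_QNUM;
        if (dev->virtqueue[qp_idx])
                return 0;
        return alloc_vring_queue_pair(dev, qp_idx);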

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---

v2: add missing break
---
 lib/librte_vhost/vhost_user.c | 61 ++++++++++++++++++++++++++++++++++---------
 1 file changed, 48 insertions(+), 13 deletions(-)
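
For reference, alloc_vring_queue_pair() itself is not touched by this patch;
it lives in lib/librte_vhost/vhost.c. Below is a rough, hypothetical sketch of
what such an allocator does, assuming it rte_zmalloc()s both vrings of a pair
and registers them in dev->virtqueue[]; the real function may differ in detail:

        /* Hypothetical sketch -- not the actual vhost.c implementation. */
        static int
        alloc_vring_queue_pair(struct virtio_net *dev, uint32_t qp_idx)
        {
                struct vhost_virtqueue *vq;

                /* One allocation backs both the RX and the TX vring. */
                vq = rte_zmalloc(NULL, sizeof(*vq) * VIRTIO_QNUM, 0);
                if (vq == NULL) {
                        RTE_LOG(ERR, VHOST_CONFIG,
                                "failed to allocate virtqueue pair %u\n", qp_idx);
                        return -1;
                }

                dev->virtqueue[qp_idx * VIRTIO_QNUM + VIRTIO_RXQ] = vq + VIRTIO_RXQ;
                dev->virtqueue[qp_idx * VIRTIO_QNUM + VIRTIO_TXQ] = vq + VIRTIO_TXQ;
                dev->virt_qp_nb += 1;

                return 0;
        }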
  

Comments

Maxime Coquelin March 22, 2017, 8:41 a.m. UTC | #1
On 03/02/2017 07:16 AM, Yuanhan Liu wrote:
> We used to allocate queues based on the index carried in the
> SET_VRING_CALL request: if the corresponding queue hadn't been
> allocated yet, allocate it.
>
> While that is right in practice (SET_VRING_CALL is the first per-vring
> request we get from QEMU during vhost-user negotiation), it is not
> right technically: the vhost-user spec does not document that it will
> always be the first per-vring request. For example, SET_VRING_ADDR
> could also be the first per-vring request.
>
> Thus, we should not depend on SET_VRING_CALL for queue allocation.
> Instead, we can catch all per-vring messages at the entrance of the
> request handler and allocate the queue pair if it hasn't been
> allocated yet.
>
> With that, we can remove the hack.
>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>
> v2: add missing break
> ---
>  lib/librte_vhost/vhost_user.c | 61 ++++++++++++++++++++++++++++++++++---------
>  1 file changed, 48 insertions(+), 13 deletions(-)


Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime
  
Yuanhan Liu March 22, 2017, 8:56 a.m. UTC | #2
On Wed, Mar 22, 2017 at 09:41:07AM +0100, Maxime Coquelin wrote:
> 
> 
> On 03/02/2017 07:16 AM, Yuanhan Liu wrote:
> >We used to allocate queues based on the index carried in the
> >SET_VRING_CALL request: if the corresponding queue hadn't been
> >allocated yet, allocate it.
> >
> >While that is right in practice (SET_VRING_CALL is the first per-vring
> >request we get from QEMU during vhost-user negotiation), it is not
> >right technically: the vhost-user spec does not document that it will
> >always be the first per-vring request. For example, SET_VRING_ADDR
> >could also be the first per-vring request.
> >
> >Thus, we should not depend on SET_VRING_CALL for queue allocation.
> >Instead, we can catch all per-vring messages at the entrance of the
> >request handler and allocate the queue pair if it hasn't been
> >allocated yet.
> >
> >With that, we can remove the hack.
> >
> >Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> >---
> >
> >v2: add missing break
> >---
> > lib/librte_vhost/vhost_user.c | 61 ++++++++++++++++++++++++++++++++++---------
> > 1 file changed, 48 insertions(+), 13 deletions(-)
> 
> 
> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks.

Applied to dpdk-next-virtio.

	--yliu
  

Patch

diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index cb2156a..8433a54 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -635,7 +635,6 @@  vhost_user_set_vring_call(struct virtio_net *dev, struct VhostUserMsg *pmsg)
 {
 	struct vhost_vring_file file;
 	struct vhost_virtqueue *vq;
-	uint32_t cur_qp_idx;
 
 	file.index = pmsg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
 	if (pmsg->payload.u64 & VHOST_USER_VRING_NOFD_MASK)
@@ -645,19 +644,7 @@  vhost_user_set_vring_call(struct virtio_net *dev, struct VhostUserMsg *pmsg)
 	RTE_LOG(INFO, VHOST_CONFIG,
 		"vring call idx:%d file:%d\n", file.index, file.fd);
 
-	/*
-	 * FIXME: VHOST_SET_VRING_CALL is the first per-vring message
-	 * we get, so we do vring queue pair allocation here.
-	 */
-	cur_qp_idx = file.index / VIRTIO_QNUM;
-	if (cur_qp_idx + 1 > dev->virt_qp_nb) {
-		if (alloc_vring_queue_pair(dev, cur_qp_idx) < 0)
-			return;
-	}
-
 	vq = dev->virtqueue[file.index];
-	assert(vq != NULL);
-
 	if (vq->callfd >= 0)
 		close(vq->callfd);
 
@@ -914,6 +901,46 @@  send_vhost_message(int sockfd, struct VhostUserMsg *msg)
 	return ret;
 }
 
+/*
+ * Allocate a queue pair if it hasn't been allocated yet
+ */
+static int
+vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, VhostUserMsg *msg)
+{
+	uint16_t vring_idx;
+	uint16_t qp_idx;
+
+	switch (msg->request) {
+	case VHOST_USER_SET_VRING_KICK:
+	case VHOST_USER_SET_VRING_CALL:
+	case VHOST_USER_SET_VRING_ERR:
+		vring_idx = msg->payload.u64 & VHOST_USER_VRING_IDX_MASK;
+		break;
+	case VHOST_USER_SET_VRING_NUM:
+	case VHOST_USER_SET_VRING_BASE:
+	case VHOST_USER_SET_VRING_ENABLE:
+		vring_idx = msg->payload.state.index;
+		break;
+	case VHOST_USER_SET_VRING_ADDR:
+		vring_idx = msg->payload.addr.index;
+		break;
+	default:
+		return 0;
+	}
+
+	qp_idx = vring_idx / VIRTIO_QNUM;
+	if (qp_idx >= VHOST_MAX_QUEUE_PAIRS) {
+		RTE_LOG(ERR, VHOST_CONFIG,
+			"invalid vring index: %u\n", vring_idx);
+		return -1;
+	}
+
+	if (dev->virtqueue[qp_idx])
+		return 0;
+
+	return alloc_vring_queue_pair(dev, qp_idx);
+}
+
 int
 vhost_user_msg_handler(int vid, int fd)
 {
@@ -943,6 +970,14 @@  vhost_user_msg_handler(int vid, int fd)
 	ret = 0;
 	RTE_LOG(INFO, VHOST_CONFIG, "read message %s\n",
 		vhost_message_str[msg.request]);
+
+	ret = vhost_user_check_and_alloc_queue_pair(dev, &msg);
+	if (ret < 0) {
+		RTE_LOG(ERR, VHOST_CONFIG,
+			"failed to alloc queue\n");
+		return -1;
+	}
+
 	switch (msg.request) {
 	case VHOST_USER_GET_FEATURES:
 		msg.payload.u64 = vhost_user_get_features();