[dpdk-dev] net/virtio: fix an incorrect behavior of device stop/start

Message ID 20170829082601.30349-1-tiwei.bie@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Yuanhan Liu
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Tiwei Bie Aug. 29, 2017, 8:26 a.m. UTC
  After starting a device, the driver shouldn't deliver to the
applications the packets that were already present in the device
before it was started. This patch fixes the issue by flushing the
Rx queues when the device is started.

Fixes: a85786dc816f ("virtio: fix states handling during initialization")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_ethdev.c |  6 ++++++
 drivers/net/virtio/virtio_rxtx.c   |  2 +-
 drivers/net/virtio/virtqueue.c     | 25 +++++++++++++++++++++++++
 drivers/net/virtio/virtqueue.h     |  5 +++++
 4 files changed, 37 insertions(+), 1 deletion(-)
  

Comments

Jens Freimann Aug. 30, 2017, 9:13 a.m. UTC | #1
Hi Tiwei,

On Tue, Aug 29, 2017 at 04:26:01PM +0800, Tiwei Bie wrote:
>After starting a device, the driver shouldn't deliver the
>packets that already existed in the device before it is
>started to the applications. This patch fixes this issue
>by flushing the Rx queues when starting the device.
>
>Fixes: a85786dc816f ("virtio: fix states handling during initialization")
>Cc: stable@dpdk.org
>
>Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>---
> drivers/net/virtio/virtio_ethdev.c |  6 ++++++
> drivers/net/virtio/virtio_rxtx.c   |  2 +-
> drivers/net/virtio/virtqueue.c     | 25 +++++++++++++++++++++++++
> drivers/net/virtio/virtqueue.h     |  5 +++++
> 4 files changed, 37 insertions(+), 1 deletion(-)

why don't we flush Tx queues as well?

>
>diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>index e320811..6d60bc1 100644
>--- a/drivers/net/virtio/virtio_ethdev.c
>+++ b/drivers/net/virtio/virtio_ethdev.c
>@@ -1737,6 +1737,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
> 		}
> 	}
>
>+	/* Flush the packets in Rx queues. */
>+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
>+		rxvq = dev->data->rx_queues[i];
>+		virtqueue_flush(rxvq->vq);
>+	}
>+

A little bit further down is a for loop going over rx queues calling
notify. Could we flush directly before the notify and save the
additional loop?

regards,
Jens
  
Tiwei Bie Aug. 30, 2017, 10:24 a.m. UTC | #2
Hi Jens,

On Wed, Aug 30, 2017 at 11:13:06AM +0200, Jens Freimann wrote:
> Hi Tiwei,
> 
> On Tue, Aug 29, 2017 at 04:26:01PM +0800, Tiwei Bie wrote:
> > After starting a device, the driver shouldn't deliver the
> > packets that already existed in the device before it is
> > started to the applications. This patch fixes this issue
> > by flushing the Rx queues when starting the device.
> > 
> > Fixes: a85786dc816f ("virtio: fix states handling during initialization")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> > drivers/net/virtio/virtio_ethdev.c |  6 ++++++
> > drivers/net/virtio/virtio_rxtx.c   |  2 +-
> > drivers/net/virtio/virtqueue.c     | 25 +++++++++++++++++++++++++
> > drivers/net/virtio/virtqueue.h     |  5 +++++
> > 4 files changed, 37 insertions(+), 1 deletion(-)
> 
> why don't we flush Tx queues as well?
> 

The elements in the used ring of Tx queues won't be delivered
to the applications. They don't contain any (packet) data, and
will just be recycled during Tx. So there is no need to flush
the Tx queues.
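
For illustration, the regular Tx path already does essentially the
same recycling on every transmit. Below is a rough, untested sketch
(illustrative only, loosely based on the Tx cleanup logic; the helper
name is made up and the exact code in virtio_rxtx.c may differ). It
assumes the virtqueue definitions from virtqueue.h:

/* Illustrative sketch: how used Tx descriptors get recycled.
 * The used ring entry only carries the descriptor index back;
 * the cookie is the already-transmitted mbuf, so it is simply
 * freed and never handed to the application.
 */
static void
tx_used_ring_recycle_sketch(struct virtqueue *vq, uint16_t num)
{
	struct vring_used_elem *uep;
	struct vq_desc_extra *dxp;
	uint16_t used_idx, desc_idx;

	while (num-- > 0) {
		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
		uep = &vq->vq_ring.used->ring[used_idx];
		desc_idx = (uint16_t)uep->id;

		dxp = &vq->vq_descx[desc_idx];
		vq->vq_used_cons_idx++;
		vq_ring_free_chain(vq, desc_idx);

		if (dxp->cookie != NULL) {
			rte_pktmbuf_free(dxp->cookie);
			dxp->cookie = NULL;
		}
	}
}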

> > 
> > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> > index e320811..6d60bc1 100644
> > --- a/drivers/net/virtio/virtio_ethdev.c
> > +++ b/drivers/net/virtio/virtio_ethdev.c
> > @@ -1737,6 +1737,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
> > 		}
> > 	}
> > 
> > +	/* Flush the packets in Rx queues. */
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		rxvq = dev->data->rx_queues[i];
> > +		virtqueue_flush(rxvq->vq);
> > +	}
> > +
> 
> A little bit further down is a for loop going over rx queues calling
> notify. Could we flush directly before the notify and save the
> additional loop?
> 

I saw there is also another `for' loop to dump the Rx queues.
And I think it makes the code more readable to flush the Rx
queues in a separate `for' loop too. Besides, this function
isn't performance critical. So I didn't combine them into one
`for' loop.

Best regards,
Tiwei Bie
  
Jens Freimann Sept. 1, 2017, 6:26 a.m. UTC | #3
On Wed, Aug 30, 2017 at 06:24:24PM +0800, Tiwei Bie wrote:
>Hi Jens,
>
>On Wed, Aug 30, 2017 at 11:13:06AM +0200, Jens Freimann wrote:
>> Hi Tiwei,
>>
>> On Tue, Aug 29, 2017 at 04:26:01PM +0800, Tiwei Bie wrote:
>> > After starting a device, the driver shouldn't deliver the
>> > packets that already existed in the device before it is
>> > started to the applications. This patch fixes this issue
>> > by flushing the Rx queues when starting the device.
>> >
>> > Fixes: a85786dc816f ("virtio: fix states handling during initialization")
>> > Cc: stable@dpdk.org
>> >
>> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>> > ---
>> > drivers/net/virtio/virtio_ethdev.c |  6 ++++++
>> > drivers/net/virtio/virtio_rxtx.c   |  2 +-
>> > drivers/net/virtio/virtqueue.c     | 25 +++++++++++++++++++++++++
>> > drivers/net/virtio/virtqueue.h     |  5 +++++
>> > 4 files changed, 37 insertions(+), 1 deletion(-)
>>
>> why don't we flush Tx queues as well?
>>
>
>The elements in the used ring of Tx queues won't be delivered
>to the applications. They don't contain any (packet) data, and
>will just be recycled during Tx. So there is no need to flush
>the Tx queues.

ok, but it wouldn't hurt either, because it's not performance relevant
and we could be sure to always start with an empty queue. It can be
done in a different patch though, I guess.
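
For illustration, it would presumably just be one more small loop next
to the new Rx one in virtio_dev_start() (untested sketch; I'm assuming
the Tx queue struct exposes its virtqueue as ->vq like the Rx one does):

	/* Untested sketch: flush stale entries in the Tx queues as well. */
	for (i = 0; i < dev->data->nb_tx_queues; i++) {
		txvq = dev->data->tx_queues[i];
		virtqueue_flush(txvq->vq);
	}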

>> >
>> > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>> > index e320811..6d60bc1 100644
>> > --- a/drivers/net/virtio/virtio_ethdev.c
>> > +++ b/drivers/net/virtio/virtio_ethdev.c
>> > @@ -1737,6 +1737,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
>> > 		}
>> > 	}
>> >
>> > +	/* Flush the packets in Rx queues. */
>> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
>> > +		rxvq = dev->data->rx_queues[i];
>> > +		virtqueue_flush(rxvq->vq);
>> > +	}
>> > +
>>
>> A little bit further down is a for loop going over rx queues calling
>> notify. Could we flush directly before the notify and save the
>> additional loop?
>>
>
>I saw there is also another `for' loop to dump the Rx queues.
>And I think it makes the code more readable to flush the Rx
>queues in a separate `for' loop too. Besides, this function
>isn't performance critical. So I didn't combine them into one
>`for' loop.

To me, code is more readable when it is concise, so I'd still vote for
combining the loops if it's logically equivalent.

On the other hand I think this should be fixed soon, so 

Reviewed-by: Jens Freimann <jfreimann@redhat.com> 


regards,
Jens
  
Tiwei Bie Sept. 1, 2017, 7:14 a.m. UTC | #4
On Fri, Sep 01, 2017 at 08:26:46AM +0200, Jens Freimann wrote:
> On Wed, Aug 30, 2017 at 06:24:24PM +0800, Tiwei Bie wrote:
> > Hi Jens,
> > 
> > On Wed, Aug 30, 2017 at 11:13:06AM +0200, Jens Freimann wrote:
> > > Hi Tiwei,
> > > 
> > > On Tue, Aug 29, 2017 at 04:26:01PM +0800, Tiwei Bie wrote:
> > > > After starting a device, the driver shouldn't deliver the
> > > > packets that already existed in the device before it is
> > > > started to the applications. This patch fixes this issue
> > > > by flushing the Rx queues when starting the device.
> > > >
> > > > Fixes: a85786dc816f ("virtio: fix states handling during initialization")
> > > > Cc: stable@dpdk.org
> > > >
> > > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > > > ---
> > > > drivers/net/virtio/virtio_ethdev.c |  6 ++++++
> > > > drivers/net/virtio/virtio_rxtx.c   |  2 +-
> > > > drivers/net/virtio/virtqueue.c     | 25 +++++++++++++++++++++++++
> > > > drivers/net/virtio/virtqueue.h     |  5 +++++
> > > > 4 files changed, 37 insertions(+), 1 deletion(-)
> > > 
> > > why don't we flush Tx queues as well?
> > > 
> > 
> > The elements in the used ring of Tx queues won't be delivered
> > to the applications. They don't contain any (packet) data, and
> > will just be recycled during Tx. So there is no need to flush
> > the Tx queues.
> 
> ok, but it would hurt either because it's not performance relevant and
> we could be sure to always start with an empty queue. It can be done
> in a different patch though I guess.
> 

Yeah, I think it's not relevant to this (bug) fix. I prefer to
keep this fix (which is supposed to be backported to the stable
branch) small. Flushing the Tx queues is more of a refinement
that won't introduce any functional change, and it can be done
in a different patch if someone wants it.

> > > >
> > > > diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
> > > > index e320811..6d60bc1 100644
> > > > --- a/drivers/net/virtio/virtio_ethdev.c
> > > > +++ b/drivers/net/virtio/virtio_ethdev.c
> > > > @@ -1737,6 +1737,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
> > > > 		}
> > > > 	}
> > > >
> > > > +	/* Flush the packets in Rx queues. */
> > > > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > > > +		rxvq = dev->data->rx_queues[i];
> > > > +		virtqueue_flush(rxvq->vq);
> > > > +	}
> > > > +
> > > 
> > > A little bit further down is a for loop going over rx queues calling
> > > notify. Could we flush directly before the notify and save the
> > > additional loop?
> > > 
> > 
> > I saw there is also another `for' loop to dump the Rx queues.
> > And I think it makes the code more readable to flush the Rx
> > queues in a separate `for' loop too. Besides, this function
> > isn't performance critical. So I didn't combine them into one
> > `for' loop.
> 
> To me code is better readable when it is concise, so I'd still vote for
> combining the loops if its logically equivalent.
> 
> On the other hand I think this should be fixed soon, so
> 
> Reviewed-by: Jens Freimann <jfreimann@redhat.com>
> 

Thank you! :-)

It's not a big deal. I'd like to leave it up to the maintainers.
They can make the decision when applying the patch.

Best regards,
Tiwei Bie
  
Yuanhan Liu Oct. 19, 2017, 1:53 p.m. UTC | #5
On Fri, Sep 01, 2017 at 03:14:26PM +0800, Tiwei Bie wrote:
> > > > On Tue, Aug 29, 2017 at 04:26:01PM +0800, Tiwei Bie wrote:
> > > > > After starting a device, the driver shouldn't deliver the
> > > > > packets that already existed in the device before it is
> > > > > started to the applications.

Otherwise? I'm assuming you fixed a real issue. If so, it'd be better
if you could add a bit of info about the issue.

> This patch fixes this issue
> > > > > by flushing the Rx queues when starting the device.
> > > > >
> > > > > Fixes: a85786dc816f ("virtio: fix states handling during initialization")
...
> > > > > @@ -1737,6 +1737,12 @@ virtio_dev_start(struct rte_eth_dev *dev)
> > > > > 		}
> > > > > 	}
> > > > >
> > > > > +	/* Flush the packets in Rx queues. */
> > > > > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > > > > +		rxvq = dev->data->rx_queues[i];
> > > > > +		virtqueue_flush(rxvq->vq);
> > > > > +	}
> > > > > +
> > > > 
> > > > A little bit further down is a for loop going over rx queues calling
> > > > notify. Could we flush directly before the notify and save the
> > > > additional loop?
> > > > 
> > > 
> > > I saw there is also another `for' loop to dump the Rx queues.
> > > And I think it makes the code more readable to flush the Rx
> > > queues in a separate `for' loop too. Besides, this function
> > > isn't performance critical. So I didn't combine them into one
> > > `for' loop.
> > 
> > To me code is better readable when it is concise, so I'd still vote for
> > combining the loops if its logically equivalent.
> > 
> > On the other hand I think this should be fixed soon, so
> > 
> > Reviewed-by: Jens Freimann <jfreimann@redhat.com>
> > 
> 
> Thank you! :-)
> 
> It's not a big deal. I'd like to leave it up to the maintainers.
> They can make the decision when applying the patch.

I agree with Jens here. We already have too many for loops in this
function. Let's not add yet another one. Besides that, the VIRTQUEUE_DUMP
loop probably should also be removed and merged into the notify loop.
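
Something along these lines -- a rough, untested sketch of that part
of virtio_dev_start() (the surrounding code may differ a bit):

	/* One pass over the Rx queues: flush, notify and dump. */
	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		rxvq = dev->data->rx_queues[i];
		/* Drop packets that arrived before the device was started. */
		virtqueue_flush(rxvq->vq);
		/* Notify the backend so it doesn't stall on a full queue. */
		virtqueue_notify(rxvq->vq);
		VIRTQUEUE_DUMP(rxvq->vq);
	}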

	--yliu
  

Patch

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e320811..6d60bc1 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1737,6 +1737,12 @@  virtio_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	/* Flush the packets in Rx queues. */
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxvq = dev->data->rx_queues[i];
+		virtqueue_flush(rxvq->vq);
+	}
+
 	/*Notify the backend
 	 *Otherwise the tap backend might already stop its queue due to fullness.
 	 *vhost backend will have no chance to be waked up
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e30377c..5e5fcfc 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -81,7 +81,7 @@  virtio_dev_rx_queue_done(void *rxq, uint16_t offset)
 	return VIRTQUEUE_NUSED(vq) >= offset;
 }
 
-static void
+void
 vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
 {
 	struct vring_desc *dp, *dp_tail;
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 9ad77b8..c3a536f 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -59,3 +59,28 @@  virtqueue_detatch_unused(struct virtqueue *vq)
 		}
 	return NULL;
 }
+
+/* Flush the elements in the used ring. */
+void
+virtqueue_flush(struct virtqueue *vq)
+{
+	struct vring_used_elem *uep;
+	struct vq_desc_extra *dxp;
+	uint16_t used_idx, desc_idx;
+	uint16_t nb_used, i;
+
+	nb_used = VIRTQUEUE_NUSED(vq);
+
+	for (i = 0; i < nb_used; i++) {
+		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
+		uep = &vq->vq_ring.used->ring[used_idx];
+		desc_idx = (uint16_t)uep->id;
+		dxp = &vq->vq_descx[desc_idx];
+		if (dxp->cookie != NULL) {
+			rte_pktmbuf_free(dxp->cookie);
+			dxp->cookie = NULL;
+		}
+		vq->vq_used_cons_idx++;
+		vq_ring_free_chain(vq, desc_idx);
+	}
+}
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 2e12086..9fffcd8 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -304,6 +304,9 @@  void virtqueue_dump(struct virtqueue *vq);
  */
 struct rte_mbuf *virtqueue_detatch_unused(struct virtqueue *vq);
 
+/* Flush the elements in the used ring. */
+void virtqueue_flush(struct virtqueue *vq);
+
 static inline int
 virtqueue_full(const struct virtqueue *vq)
 {
@@ -312,6 +315,8 @@  virtqueue_full(const struct virtqueue *vq)
 
 #define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
 
+void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
+
 static inline void
 vq_update_avail_idx(struct virtqueue *vq)
 {