[2/5] net/virtio: fix in-order Tx path for split ring

Message ID 20190219105951.31046-3-tiwei.bie@intel.com (mailing list archive)
State Accepted, archived
Delegated to: Maxime Coquelin
Series Fixes and enhancements for Tx path in Virtio PMD

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Tiwei Bie Feb. 19, 2019, 10:59 a.m. UTC
When the IN_ORDER feature is negotiated, the device may write out
just a single used ring entry for a whole batch of buffers:

"""
Some devices always use descriptors in the same order in which they
have been made available. These devices can offer the VIRTIO_F_IN_ORDER
feature. If negotiated, this knowledge allows devices to notify the
use of a batch of buffers to the driver by only writing out a single
used ring entry with the id corresponding to the head entry of the
descriptor chain describing the last buffer in the batch.

The device then skips forward in the ring according to the size of
the batch. Accordingly, it increments the used idx by the size of
the batch.

The driver needs to look up the used id and calculate the batch size
to be able to advance to where the next used ring entry will be written
by the device.
"""

Currently, the in-order Tx path for split ring can't handle this.
With this patch, the driver indexes desc_extra[] by the position
in the avail/used ring instead of by the index in the descriptor
table, so it can simply rely on the used->idx written by the
device to reclaim the descriptors and Tx buffers.
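
With desc_extra[] indexed by ring position, the number of completed
buffers falls out of used->idx alone; no per-entry id lookup is
needed. A minimal sketch (the helper name is made up; the arithmetic
matches the driver's VIRTQUEUE_NUSED() macro):

#include <stdint.h>

/*
 * Number of Tx buffers the device has marked as used, derived from
 * used->idx alone.  Both counters are free-running uint16_t, so the
 * unsigned subtraction handles wrap-around naturally.
 */
static inline uint16_t
tx_inorder_nb_used(uint16_t used_idx, uint16_t used_cons_idx)
{
	return (uint16_t)(used_idx - used_cons_idx);
}

Each of those ring slots then maps straight to its desc_extra[]
entry, which is exactly what the reworked cleanup loop in the diff
below walks.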

Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_rxtx.c | 28 ++++++++++------------------
 1 file changed, 10 insertions(+), 18 deletions(-)

Comments

Maxime Coquelin Feb. 21, 2019, 11:08 a.m. UTC | #1
On 2/19/19 11:59 AM, Tiwei Bie wrote:
> When the IN_ORDER feature is negotiated, the device may write out
> just a single used ring entry for a whole batch of buffers:
> 
> [...]
> 
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime

Patch

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index b07ceac6d..407f58bce 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -279,7 +279,7 @@ virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
 static void
 virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
 {
-	uint16_t i, used_idx, desc_idx = 0, last_idx;
+	uint16_t i, idx = vq->vq_used_cons_idx;
 	int16_t free_cnt = 0;
 	struct vq_desc_extra *dxp = NULL;
 
@@ -287,27 +287,16 @@ virtio_xmit_cleanup_inorder(struct virtqueue *vq, uint16_t num)
 		return;
 
 	for (i = 0; i < num; i++) {
-		struct vring_used_elem *uep;
-
-		used_idx = vq->vq_used_cons_idx & (vq->vq_nentries - 1);
-		uep = &vq->vq_ring.used->ring[used_idx];
-		desc_idx = (uint16_t)uep->id;
-
-		dxp = &vq->vq_descx[desc_idx];
-		vq->vq_used_cons_idx++;
-
+		dxp = &vq->vq_descx[idx++ & (vq->vq_nentries - 1)];
+		free_cnt += dxp->ndescs;
 		if (dxp->cookie != NULL) {
 			rte_pktmbuf_free(dxp->cookie);
 			dxp->cookie = NULL;
 		}
 	}
 
-	last_idx = desc_idx + dxp->ndescs - 1;
-	free_cnt = last_idx - vq->vq_desc_tail_idx;
-	if (free_cnt <= 0)
-		free_cnt += vq->vq_nentries;
-
-	vq_ring_free_inorder(vq, last_idx, free_cnt);
+	vq->vq_free_cnt += free_cnt;
+	vq->vq_used_cons_idx = idx;
 }
 
 static inline int
@@ -556,7 +545,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 
 	while (i < num) {
 		idx = idx & (vq->vq_nentries - 1);
-		dxp = &vq->vq_descx[idx];
+		dxp = &vq->vq_descx[vq->vq_avail_idx & (vq->vq_nentries - 1)];
 		dxp->cookie = (void *)cookies[i];
 		dxp->ndescs = 1;
 
@@ -708,7 +697,10 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 
 	head_idx = vq->vq_desc_head_idx;
 	idx = head_idx;
-	dxp = &vq->vq_descx[idx];
+	if (in_order)
+		dxp = &vq->vq_descx[vq->vq_avail_idx & (vq->vq_nentries - 1)];
+	else
+		dxp = &vq->vq_descx[idx];
 	dxp->cookie = (void *)cookie;
 	dxp->ndescs = needed;