[dpdk-dev] [PATCH v2 0/7] virtio ring layout optimization and simple rx/tx processing

Huawei Xie huawei.xie at intel.com
Sun Oct 18 08:28:25 CEST 2015


In a DPDK based switching environment, vhost typically runs on a dedicated core
while virtio processing in the guest VMs runs on different cores.
Take RX for example: with the generic implementation, for each guest buffer the virtio driver
a) allocates a descriptor from the free descriptor list,
b) modifies the avail ring entry to point to the allocated descriptor,
c) frees the descriptor after the packet is received (see the sketch below).
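
A rough, self-contained sketch of this per-packet flow follows. The struct and
field names (toy_vq, avail_ring, free_head, ...) are simplified stand-ins for
illustration only, not the actual virtio driver definitions:

    #include <stdint.h>

    #define RING_SIZE 256

    struct vring_desc_s {              /* simplified descriptor, illustrative only */
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
    };

    struct toy_vq {                    /* simplified virtqueue state, illustrative only */
        struct vring_desc_s desc[RING_SIZE];
        uint16_t avail_ring[RING_SIZE];
        uint16_t avail_idx;
        uint16_t free_head;            /* head of the free descriptor list */
    };

    /* a) + b): take a free descriptor, point it at the guest buffer, publish it. */
    static void
    generic_rx_refill(struct toy_vq *vq, uint64_t buf_addr, uint32_t buf_len)
    {
        uint16_t idx = vq->free_head;                   /* a) allocate from free list */
        vq->free_head = vq->desc[idx].next;

        vq->desc[idx].addr = buf_addr;
        vq->desc[idx].len  = buf_len;

        /* b) this store dirties the avail ring cache line that vhost will read */
        vq->avail_ring[vq->avail_idx % RING_SIZE] = idx;
        vq->avail_idx++;
    }

    /* c) after the packet has been received, return the descriptor to the free list. */
    static void
    generic_rx_free(struct toy_vq *vq, uint16_t idx)
    {
        vq->desc[idx].next = vq->free_head;
        vq->free_head = idx;
    }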

When vhost fetches the avail ring, it has to pull the modified cache line from the
virtio core's L1 cache, which is costly on current CPU implementations.

The idea of this optimization is to allocate a fixed descriptor for each entry of
the avail ring, so the avail ring stays the same throughout the run.
This removes the L1 cache line transfer from the virtio core to the vhost core for the avail ring.
Besides, no descriptor allocation and free is needed any more.

Most importantly, this makes vector processing possible, which further accelerates
the datapath.

This is the layout for the avail ring (taking 256 ring entries as an example), with
each entry pointing to the descriptor with the same index; a setup sketch follows the figure.
                    avail
                    idx
                    +
                    |
+----+----+---+-------------+------+
| 0  | 1  | 2 | ... |  254  | 255  |  avail ring
+-+--+-+--+-+-+---------+---+--+---+
  |    |    |       |   |      |
  |    |    |       |   |      |
  v    v    v       |   v      v
+-+--+-+--+-+-+---------+---+--+---+
| 0  | 1  | 2 | ... |  254  | 255  |  desc ring
+----+----+---+-------------+------+
                    |
                    |
+----+----+---+-------------+------+
| 0  | 1  | 2 | ... |  254  | 255  |  used ring
+----+----+---+-------------+------+
                    |
                    +
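
A minimal, self-contained setup sketch matching the figure above (struct and field
names are simplified stand-ins, not the actual DPDK/virtio definitions):

    #include <stdint.h>

    #define RING_SIZE 256              /* ring size used in the figure above */

    struct vring_avail_s {             /* simplified avail ring, illustrative only */
        uint16_t flags;
        uint16_t idx;
        uint16_t ring[RING_SIZE];
    };

    /*
     * Pin avail ring entry i to descriptor i once, at RX queue setup.
     * Afterwards the RX refill path only rewrites desc[i].addr/len and bumps
     * avail->idx; the ring[] contents never change again, so vhost no longer
     * has to pull a modified avail ring cache line from the virtio core.
     */
    static void
    rx_avail_ring_fixed_init(struct vring_avail_s *avail)
    {
        for (uint16_t i = 0; i < RING_SIZE; i++)
            avail->ring[i] = i;
    }

After this one-time initialization, the RX refill path only has to update the
descriptors and the avail index, never the avail ring entries themselves.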

This is the ring layout for TX.
As we need one virtio header for each transmitted packet, only 128 slots are
available for packet data; a setup sketch follows the figure.

                         ++
                         ||
                         ||
+-----+-----+-----+--------------+------+------+------+
|  0  |  1  | ... |  127 || 128  | 129  | ...  | 255  |   avail ring
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
   |     |            |  ||  |      |             |
   v     v            v  ||  v      v             v
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
| 128 | 129 | ... |  255 || 128  | 129  | ...  | 255  |   desc ring for virtio_net_hdr
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
   |     |            |  ||  |      |             |
   v     v            v  ||  v      v             v
+--+--+--+--+-----+---+------+---+--+---+------+--+---+
|  0  |  1  | ... |  127 ||  0   |  1   | ...  | 127  |   desc ring for tx data
+-----+-----+-----+--------------+------+------+------+
                         ||
                         ||
                         ++
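
A minimal, self-contained setup sketch for this TX layout (again with simplified
stand-in structures and names; hdr_mem/hdr_size are assumed to describe a
pre-allocated virtio_net_hdr area):

    #include <stdint.h>

    #define RING_SIZE   256
    #define HALF        (RING_SIZE / 2)
    #define DESC_F_NEXT 1              /* "chained to next descriptor" flag */

    struct vring_desc_s {              /* simplified descriptor, illustrative only */
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
    };

    struct vring_avail_s {             /* simplified avail ring, illustrative only */
        uint16_t flags;
        uint16_t idx;
        uint16_t ring[RING_SIZE];
    };

    /*
     * Pin the TX layout once at queue setup:
     *   - avail entry i (i < 128) -> header descriptor i + 128, which chains
     *     to data descriptor i in the lower half;
     *   - avail entry i (i >= 128) -> descriptor i itself (the same headers).
     */
    static void
    tx_ring_fixed_init(struct vring_desc_s *desc, struct vring_avail_s *avail,
                       uint64_t hdr_mem, uint32_t hdr_size)
    {
        for (uint16_t i = 0; i < HALF; i++) {
            avail->ring[i] = i + HALF;               /* header desc in upper half */

            desc[i + HALF].addr  = hdr_mem + (uint64_t)i * hdr_size;
            desc[i + HALF].len   = hdr_size;
            desc[i + HALF].flags = DESC_F_NEXT;
            desc[i + HALF].next  = i;                /* chain to the data descriptor */

            desc[i].flags = 0;                       /* data desc filled per packet */
        }
        for (uint16_t i = HALF; i < RING_SIZE; i++)
            avail->ring[i] = i;                      /* upper half: same header descs */
    }

With this chaining fixed at setup, the idea is that the TX fast path only needs to
rewrite the data descriptors and bump the avail index for each burst.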

A performance boost can be observed only if the virtio backend isn't the bottleneck,
or in the VM2VM case.
Several vhost optimization patches will also be submitted later.

Changes in v2:
- Remove the configure macro
- Enable simple RX/TX processing when the user specifies the simple txq flags
- Reword some comments and commit messages

Huawei Xie (7):
  virtio: add virtio_rxtx.h header file
  virtio: add software rx ring, fake_buf into virtqueue
  virtio: rx/tx ring layout optimization
  virtio: fill RX avail ring with blank mbufs
  virtio: virtio vec rx
  virtio: simple tx routine
  virtio: pick simple rx/tx func

 drivers/net/virtio/Makefile             |   2 +-
 drivers/net/virtio/virtio_ethdev.c      |  13 ++
 drivers/net/virtio/virtio_ethdev.h      |   5 +
 drivers/net/virtio/virtio_rxtx.c        |  53 ++++-
 drivers/net/virtio/virtio_rxtx.h        |  39 ++++
 drivers/net/virtio/virtio_rxtx_simple.c | 403 ++++++++++++++++++++++++++++++++
 drivers/net/virtio/virtqueue.h          |   5 +
 7 files changed, 517 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/virtio/virtio_rxtx.h
 create mode 100644 drivers/net/virtio/virtio_rxtx_simple.c

-- 
1.8.1.4


