[dpdk-dev] [PATCH 0/7] virtio/vhost: Add MTU feature support

Yuanhan Liu yuanhan.liu at linux.intel.com
Thu Feb 23 08:10:00 CET 2017


On Mon, Feb 13, 2017 at 03:28:13PM +0100, Maxime Coquelin wrote:
> This series adds support for Virtio's new MTU feature [1].

Seems you missed a link here?

> The MTU
> value is set via QEMU parameters.
> 
> If the feature is negotiated (i.e. supported by both host and guest,
> and valid MTU value is set in QEMU via its host_mtu parameter), QEMU
> shares the configured MTU value through a dedicated Vhost protocol
> feature.
> 
> On vhost side, the value is stored in the virtio_net structure, and
> made available to the application thanks to the new vhost lib's
> rte_vhost_mtu_get() function.
> 
> The rte_vhost_mtu_set() function is also implemented, but it only
> succeeds if the application sets the same value as the one configured
> in QEMU. The idea is that the application would use it to ensure the
> configured MTU value is consistent, but maybe the mtu_get() API is
> enough, and mtu_set() could just be dropped.

If the vhost MTU is designed to be read-only, then we should probably
drop the mtu_set function.

> Vhost PMD mtu_set callback is also implemented
> in the same spirit.
> 
> To be able to set eth_dev's MTU value at the right time, i.e. to call
> rte_vhost_mtu_get() just after Virtio features have been negotiated
> and before the device is really started, a new vhost flag has been
> introduced (VIRTIO_DEV_READY), because the VIRTIO_DEV_RUNNING flag is
> set too late (after .new_device() ops is called).

Okay, and I think this kind of info should go in the corresponding
commit log.

> Regarding valid MTU values, the maximum MTU value accepted on vhost
> side is 65535 bytes, as defined in Virtio Spec and supported in
> Virtio-net kernel driver. But in the Virtio PMD, the current maximum
> frame size is 9728 bytes (~9700 bytes MTU), so the maximum MTU
> accepted by the Virtio PMD is the minimum of ~9700 bytes and the
> host's MTU.
> 
> Initially, we thought about disabling the rx-mergeable feature when
> MTU value was low enough to ensure all received packets would fit in
> receive buffers (when offloads are disabled). Doing this, we would
> save one cache miss in the receive path. The problem is that we don't
> know the buffer size at Virtio feature negotiation time.
> It might be possible for the application to call the configure
> callback again once the Rx queue is set up, but it seems a bit hacky.

Worse, if multiple queues are involved, each queue could have its own
mempool, meaning the buffer sizes could differ, whereas the MTU
feature is global.

	--yliu

> So I decided to skip this optimization for now; feedback is of
> course appreciated.
> 
> Finally, this series also adds MTU value printing in testpmd's
> "show port info" command when non-zero.
> 
> This series targets the v17.05 release.
> 
> Cheers,
> Maxime
> 
> Maxime Coquelin (7):
>   vhost: Enable VIRTIO_NET_F_MTU feature
>   vhost: vhost-user: Add MTU protocol feature support
>   vhost: Add new ready status flag
>   vhost: Add API to get/set MTU value
>   net/vhost: Implement mtu_set callback
>   net/virtio: Add MTU feature support
>   app/testpmd: print MTU value in show port info
> 
>  app/test-pmd/config.c               |  5 +++++
>  doc/guides/nics/features/vhost.ini  |  1 +
>  doc/guides/nics/features/virtio.ini |  1 +
>  drivers/net/vhost/rte_eth_vhost.c   | 18 +++++++++++++++
>  drivers/net/virtio/virtio_ethdev.c  | 22 +++++++++++++++++--
>  drivers/net/virtio/virtio_ethdev.h  |  3 ++-
>  drivers/net/virtio/virtio_pci.h     |  3 +++
>  lib/librte_vhost/rte_virtio_net.h   | 31 ++++++++++++++++++++++++++
>  lib/librte_vhost/vhost.c            | 42 ++++++++++++++++++++++++++++++++++-
>  lib/librte_vhost/vhost.h            |  9 +++++++-
>  lib/librte_vhost/vhost_user.c       | 44 +++++++++++++++++++++++++++++++------
>  lib/librte_vhost/vhost_user.h       |  5 ++++-
>  12 files changed, 171 insertions(+), 13 deletions(-)
> 
> -- 
> 2.9.3
