[dpdk-dev] [PATCH] vhost: support Generic Segmentation Offload

Hu, Jiayu jiayu.hu at intel.com
Thu Dec 7 07:30:29 CET 2017


Hi Maxime,

> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin at redhat.com]
> Sent: Wednesday, December 6, 2017 4:34 PM
> To: Hu, Jiayu <jiayu.hu at intel.com>; dev at dpdk.org
> Cc: yliu at fridaylinux.org; Tan, Jianfeng <jianfeng.tan at intel.com>
> Subject: Re: [dpdk-dev] [PATCH] vhost: support Generic Segmentation
> Offload
> 
> Hi Jiayu,
> 
> On 11/28/2017 06:28 AM, Jiayu Hu wrote:
> > In virtio, Generic Segmentation Offload (GSO) is the feature for the
> > backend, which means the backend can receive packets with any GSO
> > type.
> >
> > Virtio-net enables the GSO feature by default, and vhost-net supports it.
> > To make live migration from vhost-net to vhost-user possible, this patch
> > enables GSO for vhost-user.
> 
> Please note that the application relying on Vhost library may disable
> some features, breaking the migration from vhost-net to vhost-user even
> if all features are supported in the vhost-user lib.
> 
> For example, ovs-dpdk disables the following features:
>      err = rte_vhost_driver_disable_features(dev->vhost_id,
>                                  1ULL << VIRTIO_NET_F_HOST_TSO4
>                                  | 1ULL << VIRTIO_NET_F_HOST_TSO6
>                                  | 1ULL << VIRTIO_NET_F_CSUM);
> 
> 
> > Signed-off-by: Jiayu Hu <jiayu.hu at intel.com>
> > ---
> >   lib/librte_vhost/vhost.h | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> > index 1cc81c1..04f54cb 100644
> > --- a/lib/librte_vhost/vhost.h
> > +++ b/lib/librte_vhost/vhost.h
> > @@ -204,6 +204,7 @@ struct vhost_msg {
> >   				(1ULL << VIRTIO_F_VERSION_1)   | \
> >   				(1ULL << VHOST_F_LOG_ALL)      | \
> >   				(1ULL <<
> VHOST_USER_F_PROTOCOL_FEATURES) | \
> > +				(1ULL << VIRTIO_NET_F_GSO) | \
> 
> This feature is also enabled by default in QEMU, and seems also to be
> acked by default in the virtio-net kernel driver.
> 
> Does it have an impact on performance? Be it good or bad.
> 
> How to test it?

VIRTIO_NET_F_GSO is the combination of all backend GSO types, such as
VIRTIO_NET_F_HOST_UFO and VIRTIO_NET_F_HOST_ECN. Supporting
VIRTIO_NET_F_GSO is equivalent to supporting all backend GSO types.

In the virtio-net driver, VIRTIO_NET_F_GSO influences the offload capabilities of
virtio-net devices. When VIRTIO_NET_F_GSO is negotiated, the virtio-net device turns
on TSO (with ECN) and UFO by default, which is equivalent to enabling "host_ufo",
"host_tso4/6" and "host_ecn".
 
Regarding performance: when VIRTIO_NET_F_GSO is enabled, the frontend
can send large TCP/UDP packets (exceeding the MTU) to the backend. Larger packets
reduce the per-packet overhead, so I expect it to be good for performance.

We can test this feature with the following configuration:
Environment:
- One server with two physical interfaces (p1 and p2).
- p1 and p2 are connected physically. p1 is assigned to DPDK, and p2 to the kernel.

Steps:
- Launch testpmd with p1 and one vhost-user port.
- Launch QEMU with "gso=on,csum=on" on the virtio-net device. In the VM, you can
  see that TSO_ECN and UFO of the virtio-net port are enabled by default.
- Run "iperf -u -s ..." on p2.
- Run "iperf -u -c ... -l 7000B" in the VM, so that the iperf client sends large
  UDP packets.
- Run "show port xstats all" in testpmd. You can see the vhost-user port receives
  large UDP packets from the frontend.

You can also use the above steps to test the vhost-user host_ufo feature.
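
The steps above can be sketched as follows; this is a hedged outline, where the
socket path, CPU/memory options and the server's IP address are placeholders for
this particular setup:

```shell
# 1. Launch testpmd with p1 and one vhost-user port:
./testpmd -l 0-1 -n 4 --vdev 'net_vhost0,iface=/tmp/vhost-user.sock' -- -i

# 2. Launch QEMU with GSO and checksum offload enabled on the virtio-net device:
qemu-system-x86_64 ... \
    -chardev socket,id=char0,path=/tmp/vhost-user.sock \
    -netdev type=vhost-user,id=net0,chardev=char0 \
    -device virtio-net-pci,netdev=net0,gso=on,csum=on

# 3. Start a UDP server on the kernel-owned interface p2:
iperf -u -s

# 4. In the VM, send 7000-byte UDP datagrams (larger than a 1500-byte MTU):
iperf -u -c <p2-ip-address> -l 7000B

# 5. At the testpmd prompt, check that the vhost-user port receives large packets:
#    testpmd> show port xstats all
```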

Thanks,
Jiayu
> 
> >   				(1ULL << VIRTIO_NET_F_HOST_TSO4) | \
> >   				(1ULL << VIRTIO_NET_F_HOST_TSO6) | \
> >   				(1ULL << VIRTIO_NET_F_CSUM)    | \
> >
> 
> Thanks,
> Maxime

