[dpdk-dev] [PATCH v2] add mtu set in virtio

Dey, Souvik sodey at sonusnet.com
Fri Sep 9 05:44:52 CEST 2016


Are we good to get this in for 16.11 and then revisit it when the VHOST improvements come in? That would at least cover the gap between 16.11 and the arrival of those improvements.

--
Regards,
Souvik

-----Original Message-----
From: Yuanhan Liu [mailto:yuanhan.liu at linux.intel.com] 
Sent: Thursday, September 8, 2016 3:57 AM
To: Maxime Coquelin <maxime.coquelin at redhat.com>
Cc: Dey, Souvik <sodey at sonusnet.com>; stephen at networkplumber.org; huawei.xie at intel.com; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2] add mtu set in virtio

On Thu, Sep 08, 2016 at 09:50:34AM +0200, Maxime Coquelin wrote:
> 
> 
> On 09/08/2016 09:30 AM, Yuanhan Liu wrote:
> >On Wed, Sep 07, 2016 at 11:16:47AM +0200, Maxime Coquelin wrote:
> >>
> >>
> >>On 09/07/2016 05:25 AM, Yuanhan Liu wrote:
> >>>On Tue, Aug 30, 2016 at 09:57:39AM +0200, Maxime Coquelin wrote:
> >>>>Hi Souvik,
> >>>>
> >>>>On 08/30/2016 01:02 AM, souvikdey33 wrote:
> >>>>>Signed-off-by: Souvik Dey <sodey at sonusnet.com>
> >>>>>
> >>>>>Fixes: 1fb8e8896ca8 ("Signed-off-by: Souvik Dey 
> >>>>><sodey at sonusnet.com>")
> >>>>>Reviewed-by: Stephen Hemminger <stephen at networkplumber.org>
> >>>>>
> >>>>>Virtio interfaces should also support setting the MTU, since in 
> >>>>>cloud deployments the MTU is expected to be consistent across the 
> >>>>>infrastructure, following what the DHCP server advertises, rather 
> >>>>>than being hardcoded to the 1500-byte default.
> >>>>>---
> >>>>>drivers/net/virtio/virtio_ethdev.c | 12 ++++++++++++
> >>>>>1 file changed, 12 insertions(+)
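For reference, such a change comes down to registering an mtu_set callback in the PMD's eth_dev_ops table. Below is a minimal sketch of what that callback could look like; the VIRTIO_MAX_RX_PKTLEN bound and the exact validity check are assumptions for illustration, not necessarily what this patch does:

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Assumed upper bound for a virtio frame; the driver's real limit may differ. */
    #define VIRTIO_MAX_RX_PKTLEN 9728

    static int
    virtio_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
    {
        uint32_t frame_size = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;

        if (frame_size < ETHER_MIN_LEN || frame_size > VIRTIO_MAX_RX_PKTLEN)
            return -EINVAL;

        /* No device register to program; just record the new value. */
        dev->data->mtu = mtu;
        return 0;
    }

    static const struct eth_dev_ops virtio_eth_dev_ops = {
        /* other callbacks elided */
        .mtu_set = virtio_mtu_set,
    };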
> >>>>
> >>>>FYI, there are some ongoing changes in the VIRTIO specification 
> >>>>so that the VHOST interface exposes its MTU to its VIRTIO peer.
> >>>>It may also be used as an alternative to what your patch achieves.
> >>>>
> >>>>I am working on its implementation in Qemu/DPDK, our goal being to 
> >>>>reduce the performance drop seen for small packets when the Rx 
> >>>>mergeable buffers feature is enabled.
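For context, the spec work referred to here lets the host advertise an MTU through the device config space behind a new feature bit. A rough sketch of how the guest-side PMD might consume it is below; the feature bit value, the config layout, and the vtpci_* helper names are assumptions based on the in-progress proposal and the existing virtio PMD code, inside the driver's init path:

    /* Assumed feature bit and config layout from the in-progress proposal. */
    #define VIRTIO_NET_F_MTU 3

    struct virtio_net_config {
        uint8_t  mac[6];
        uint16_t status;
        uint16_t max_virtqueue_pairs;
        uint16_t mtu;   /* valid only when VIRTIO_NET_F_MTU is negotiated */
    } __attribute__((packed));

    /* During device initialisation, after feature negotiation: */
    if (vtpci_with_feature(hw, VIRTIO_NET_F_MTU)) {
        uint16_t mtu;

        vtpci_read_dev_config(hw, offsetof(struct virtio_net_config, mtu),
                              &mtu, sizeof(mtu));
        eth_dev->data->mtu = mtu;
    }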
> >>>
> >>>Mind educating me a bit on how that works?
> >>
> >>Of course.
> >>
> >>Basically, this is a way to advise the guest of the MTU we want it to use.
> >>In the guest, if GRO is not enabled:
> >> - In the case of the kernel virtio-net driver, it could be used to size 
> >>the SKBs for the expected MTU. If possible, we could disable Rx 
> >>mergeable buffers.
> >> - In the case of the virtio PMD, if the MTU advised by the host is lower 
> >>than the pre-allocated mbuf size for the receive queue, then we should 
> >>not need mergeable buffers (a sketch of that check follows below).
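Here is a sketch of the check hinted at in the second bullet, using hypothetical names (advised_mtu, rxq->mpool, use_mrg_rxbuf) purely for illustration:

    /* At Rx queue setup: if the host-advised MTU fits in one pre-allocated
     * mbuf (including the virtio header), mergeable buffers are not needed. */
    uint16_t buf_size = rte_pktmbuf_data_room_size(rxq->mpool) -
                        RTE_PKTMBUF_HEADROOM;
    uint32_t max_frame = advised_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;

    if (max_frame + sizeof(struct virtio_net_hdr_mrg_rxbuf) <= buf_size)
        use_mrg_rxbuf = 0;  /* every packet fits in a single mbuf */
    else
        use_mrg_rxbuf = 1;  /* keep relying on VIRTIO_NET_F_MRG_RXBUF */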
> >
> >Thanks for the explanation!
> >
> >I see. So, the point is to avoid using mergeable buffers even while the 
> >feature is enabled.
> >
> >>Does that sound reasonable?
> >
> >Yeah, maybe. I just don't know how well it may work in real life. Have 
> >you got any rough data so far?
> 
> The PoC is not done yet; only the Qemu part is implemented.
> But what we noticed is that for small packets, we see a 50% 
> degradation when Rx mergeable buffers are on, running the PVP 
> use-case.
> 
> The main part of the degradation is due to an additional cache miss in 
> the virtio PMD receive path, because we fetch the header to get the 
> number of buffers.
> 
> When sending only small packets, removing this access recovers 
> 25% of the degradation.
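The access being discussed is the num_buffers read from the virtio header that sits in front of the packet data. Roughly what the mergeable Rx path of the virtio PMD does, simplified (rxm and hdr_size are taken from the surrounding receive loop):

    struct virtio_net_hdr_mrg_rxbuf {
        struct virtio_net_hdr hdr;
        uint16_t num_buffers;   /* how many descriptors carry this packet */
    };

    /* Inside the mergeable receive loop: the header lives in the mbuf
     * headroom, so reading num_buffers touches an extra cache line. */
    header = (struct virtio_net_hdr_mrg_rxbuf *)((char *)rxm->buf_addr +
                 RTE_PKTMBUF_HEADROOM - hdr_size);
    seg_num = header->num_buffers;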
> 
> The remaining 25% may be reduced significantly with Zhihong's series.
> 
> Hope it answers your questions.

Yes, it does and thanks for the info.

	--yliu
