[dpdk-dev] [PATCH v4] vhost: Add indirect descriptors support to the TX path

Maxime Coquelin maxime.coquelin at redhat.com
Mon Oct 17 13:23:23 CEST 2016



On 10/14/2016 05:50 PM, Maxime Coquelin wrote:
>
>
> On 10/14/2016 09:24 AM, Wang, Zhihong wrote:
>>
>>
>>> -----Original Message-----
>>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Maxime Coquelin
>>> Sent: Tuesday, September 27, 2016 4:43 PM
>>> To: yuanhan.liu at linux.intel.com; Xie, Huawei <huawei.xie at intel.com>;
>>> dev at dpdk.org
>>> Cc: vkaplans at redhat.com; mst at redhat.com;
>>> stephen at networkplumber.org; Maxime Coquelin
>>> <maxime.coquelin at redhat.com>
>>> Subject: [dpdk-dev] [PATCH v4] vhost: Add indirect descriptors
>>> support to
>>> the TX path
>>>
>>> Indirect descriptors are commonly supported by virtio-net devices,
>>> allowing a larger number of requests to be dispatched.
>>>
>>> When the virtio device sends a packet using indirect descriptors,
>>> only one slot is used in the ring, even for large packets.
>>>
>>> The main effect is to improve the 0% packet loss benchmark.
>>> A PVP benchmark using MoonGen (64-byte packets) on the TE, and
>>> testpmd (fwd io on the host, macswap in the VM) on the DUT, shows
>>> a +50% gain for zero loss.
>>>
>>> On the downside, a micro-benchmark using testpmd txonly in the VM
>>> and rxonly on the host shows a loss of between 1 and 4%. But
>>> depending on the needs, the feature can be disabled at VM boot time
>>> by passing the indirect_desc=off argument to the vhost-user device
>>> in QEMU.
>>>
>>> Signed-off-by: Maxime Coquelin <maxime.coquelin at redhat.com>
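
For readers less familiar with the mechanism described in the quoted
commit message: with indirect descriptors, a single ring slot points
at a separate table of descriptors instead of at packet data. A
minimal sketch of the layout, following the standard vring definitions
from the virtio spec (field names as in linux/virtio_ring.h):

    #include <stdint.h>

    /* Standard vring descriptor (see linux/virtio_ring.h). */
    struct vring_desc {
            uint64_t addr;   /* guest-physical buffer address */
            uint32_t len;    /* buffer length in bytes */
            uint16_t flags;  /* VRING_DESC_F_NEXT / _WRITE / _INDIRECT */
            uint16_t next;   /* next descriptor index when chained */
    };

    /*
     * With VRING_DESC_F_INDIRECT set in 'flags', 'addr' points to an
     * array of vring_desc entries rather than to packet data, so a
     * whole chain (virtio-net header + all payload segments) consumes
     * a single slot in the ring:
     *
     *   ring[i] --INDIRECT--> [ hdr | seg0 | seg1 | ... ]
     */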
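
Likewise, for the indirect_desc=off knob mentioned in the commit
message, the relevant part of a QEMU command line would look roughly
like this (chardev id and socket path are placeholders, the rest of
the command is elided):

    qemu-system-x86_64 ... \
        -chardev socket,id=char0,path=/tmp/vhost-user0.sock \
        -netdev type=vhost-user,id=net0,chardev=char0 \
        -device virtio-net-pci,netdev=net0,indirect_desc=off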
>>
>>
>> Hi Maxime,
>>
>> This patch doesn't seem to work with Windows virtio guests in my
>> test. Have you run similar tests before?
>>
>> The way I test:
>>
>>  1. Make sure https://patchwork.codeaurora.org/patch/84339/ is applied
>>
>>  2. Start testpmd with iofwd between 2 vhost ports (an example
>>     invocation is sketched below this quoted message)
>>
>>  3. Start 2 Windows guests connected to the 2 vhost ports
>>
>>  4. Disable firewall and assign IP to each guest using ipconfig
>>
>>  5. Use ping to test connectivity
>>
>> When I disable the feature added by this patch by setting:
>>
>>     0ULL << VIRTIO_RING_F_INDIRECT_DESC,
>>
>> the connection is fine, but when I restore:
>>
>>     1ULL << VIRTIO_RING_F_INDIRECT_DESC,
>>
>> the connection is broken.
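
The 0ULL/1ULL toggle above flips the INDIRECT_DESC bit in the vhost
library's supported-features mask, which in the sources of this period
looked roughly like the following (abridged sketch, not the exact
code):

    #define VHOST_SUPPORTED_FEATURES \
            ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
             /* ... other feature bits ... */ \
             (1ULL << VIRTIO_RING_F_INDIRECT_DESC))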
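
And for completeness, step 2 of the test procedure above would look
something like this with the vhost PMD (core mask, memory and socket
paths are placeholders):

    ./testpmd -c 0x7 -n 4 --socket-mem 1024 \
        --vdev 'eth_vhost0,iface=/tmp/vhost-user0.sock' \
        --vdev 'eth_vhost1,iface=/tmp/vhost-user1.sock' \
        -- -i
    testpmd> set fwd io
    testpmd> start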
>
> Just noticed I didn't reply-all this morning.
> I sent a debug patch to Zhihong, which shows that indirect desc chaining
> looks OK.
>
> On my side, I just set up 2 Windows 2016 VMs, and I confirm the issue.
> I'll continue the investigation early next week.

The root cause is identified.
When the INDIRECT_DESC feature is negotiated, the Windows guest uses
indirect descriptors for both Tx and Rx, whereas Linux guests (the
virtio PMD and the virtio-net kernel driver) use them only for Tx.

I'll implement indirect descriptor support for the Rx path in the
vhost lib, but the change will be too big for an -rc release.
In the meantime, I propose disabling the INDIRECT_DESC feature in the
vhost lib; we can still enable it locally for testing.
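
For that local testing, the feature API the lib already exposes could
presumably be used from the application; a rough sketch, assuming the
rte_vhost_feature_disable()/rte_vhost_feature_enable() calls available
in this period:

    #include <linux/virtio_ring.h>
    #include <rte_virtio_net.h>

    /* In the application's init path, before registering the vhost
     * driver: force indirect descriptors off... */
    rte_vhost_feature_disable(1ULL << VIRTIO_RING_F_INDIRECT_DESC);

    /* ...or turn them back on for testing, provided the bit is still
     * part of the library's supported mask. */
    rte_vhost_feature_enable(1ULL << VIRTIO_RING_F_INDIRECT_DESC);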

Yuanhan, is that OK with you?

> Has anyone already tested a Windows guest with vhost-net, which also
> supports indirect descriptors?

I tested it and can confirm it works with vhost-net.

Regards,
Maxime

