[dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
Linhaifeng
haifeng.lin at huawei.com
Fri Jan 30 11:33:41 CET 2015
On 2015/1/30 16:20, Xu, Qian Q wrote:
> Haifeng
> Could you give more information so that we can reproduce your issue? Thanks.
> 1. What's your dpdk package, based on which branch, with Huawei's vhost-user's patches?
Not with Huawei's patches. I implemented a demo before Huawei's patches, using OVDK's vhost_dequeue_burst and vhost_enqueue_burst.
Now I'm trying to run vhost-user with the dpdk vhost example (master branch).
> 2. What's your step and command to launch vhost sample?
BTW, how do I run the vhost example in vm2vm mode?
Does VM2VM mean I can send packets from vm1 to vm2?
I set up with the following steps but can't send packets in the VM:
mount -t hugetlbfs nodev /mnt/huge -o pagesize=1G
mount -t hugetlbfs nodev /dev/hugepages -o pagesize=2M
echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
modprobe uio
insmod ${RTE_SDK}/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
dpdk_nic_bind.py -b igb_uio 82:00.0 82:00.1
rmmod vhost_net
modprobe cuse
insmod ${RTE_SDK}/lib/librte_vhost/eventfd_link/eventfd_link.ko
${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /mnt/huge -m 2048 -- -p 0x1 --vm2vm 1
qemu-wrap.py -enable-kvm -mem-path /mnt/huge/ -mem-prealloc -smp 2 \
-netdev tap,id=hostnet1,vhost=on,ifname=port0 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:00:00:00:01 -hda /mnt/sdb/linhf/vm1.img -m 2048 -vnc :0
qemu-wrap.py -enable-kvm -mem-path /mnt/huge/ -mem-prealloc -smp 2 \
-netdev tap,id=hostnet1,vhost=on,ifname=port0 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:00:00:00:02 -hda /mnt/sdb/linhf/vm2.img -m 2048 -vnc :1
> 3. What is mz? Your internal tool? I can't yum install mz or download mz tool.
http://www.perihel.at/sec/mz/
> 4. As to your test scenario, I understand it in this way: virtio1 in VM1, virtio2 in VM2, then let virtio1 send packets to virtio2. The problem is that after 3 hours, virtio2 can't receive packets, but virtio1 is still sending packets, am I right? So mz is like a packet generator to send packets, right?
Yes, you are right.
>
>
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Linhaifeng
> Sent: Thursday, January 29, 2015 9:51 PM
> To: Xie, Huawei; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>
>
>
> On 2015/1/29 21:00, Xie, Huawei wrote:
>>
>>
>>> -----Original Message-----
>>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>>> Sent: Thursday, January 29, 2015 8:39 PM
>>> To: Xie, Huawei; dev at dpdk.org
>>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer
>>> when there is no buffer
>>>
>>>
>>>
>>> On 2015/1/29 18:39, Xie, Huawei wrote:
>>>
>>>>> -	if (count == 0)
>>>>> +	/* If there are no buffers we should notify the guest to fill
>>>>> +	 * the ring. This is needed when the guest uses the virtio_net
>>>>> +	 * driver (not the PMD).
>>>>> +	 */
>>>>> +	if (count == 0) {
>>>>> +		if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
>>>>> +			eventfd_write((int)vq->kickfd, 1);
>>>>> 		return 0;
>>>>> +	}
>>>>
>>>> Haifeng:
>>>> Is it the root cause and is it protocol required?
>>>> Could you give a detailed description for that scenario?
>>>>
>>>
>>> I use mz to send data from VM1 to VM2. Both VMs use the virtio-net driver.
>>> VM1 executes the following script:
>>>
>>> for ((i = 0; i < 999999999; i++)); do
>>>     mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 \
>>>        -b 00:00:00:00:00:02 -c 10000000 -p 512
>>>     sleep 4
>>> done
>>>
>>> VM2 executes the following command to watch:
>>> watch -d ifconfig
>>>
>>> After many hours, VM2 stops receiving data.
>>>
>>> Could you test it?
>>
>>
>> We could try next week after I send the whole patch.
>> How many hours? Is it reproducible at your side? I injected packets through a packet generator into the guest for more than ten hours and haven't hit this issue.
>
> About three hours.
> What kind of driver did you use in the guest? virtio-net-pmd or virtio-net?
>
>
>> As I said in another mail sent to you, could you dump the status of vring if you still have the spot?
>
> How to dump the status of the vring in the guest?
>
>> Could you please also reply to that mail?
>>
>
> Which mail?
>
>
>> For the patch, if we have no root cause, I prefer not to apply it, so that we don't send more interrupts to the guest than needed and hurt performance.
>
> I found that if we add this notification, performance is actually better (a gain of 100 kpps with 64-byte UDP packets).
>
>> People could temporarily apply this patch as a workaround.
>>
>> Or anyone
>>
>
> OK. I'm also not sure about this bug. I think I should do more digging to find the real root cause.
>
>>
>>> --
>>> Regards,
>>> Haifeng
>>
>>
>>
>
--
Regards,
Haifeng