[dpdk-dev] [PATCH RFC v2 08/12] lib/librte_vhost: vhost-user support

Xie, Huawei huawei.xie at intel.com
Thu Dec 11 21:16:54 CET 2014



> -----Original Message-----
> From: Xie, Huawei
> Sent: Thursday, December 11, 2014 10:13 AM
> To: 'Linhaifeng'; dev at dpdk.org
> Cc: haifeng.lin at intel.com
> Subject: RE: [dpdk-dev] [PATCH RFC v2 08/12] lib/librte_vhost: vhost-user
> support
> 
> >
> > Does this only support one vhost-user port?
> 
> Do you mean vhost server by "port"?
> If that is the case, yes, now only one vhost server is supported for multiple virtio
> devices.
> As stated in the cover letter, we have a requirement and a plan for multiple
> server support, though I am not sure it is absolutely necessary.
> 
> >
> > Can you mmap the region if gpa is 0? When I run a VM with two numa nodes
> > (qemu will create two hugepage files), I found that mmap always fails for
> > the region whose gpa is 0.
> 
> The current implementation doesn't assume there is only one huge page file
> backing the guest memory.
> It maps every region using the fd of that region.
> Could you please paste your guest VM command line here?
> 
> >
> > BTW, can we ensure the memory regions cover all the memory of the
> > hugepages for the VM?
> 
> I think so, because virtio devices could use any normal guest memory, but we
> needn't ensure that.
> We only need to map the regions passed to us from qemu vhost, which should be
> enough to translate the GPAs in the vring from virtio in the guest; otherwise
> it is a bug in qemu vhost.
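
To make "it maps every region using the fd of that region" concrete, here is a rough
sketch of the idea (not the actual librte_vhost code; the struct and function names
are illustrative only): each region announced over the socket is mmapped with its own
fd, and a GPA such as a vring address is translated by finding the region that
contains it.

#include <stdint.h>
#include <sys/mman.h>

struct mem_region {
	uint64_t guest_phys_addr;  /* GPA where the region starts */
	uint64_t size;             /* length of the region */
	uint64_t mmap_offset;      /* offset inside the backing file */
	int      fd;               /* fd received over the unix socket */
	void    *mmap_addr;        /* local mapping of the backing file */
};

/* Map one region with its own fd.  Mapping from file offset 0 keeps
 * mmap_offset usable as an offset into the mapping. */
static int
map_region(struct mem_region *reg)
{
	reg->mmap_addr = mmap(NULL, reg->mmap_offset + reg->size,
			      PROT_READ | PROT_WRITE, MAP_SHARED,
			      reg->fd, 0);
	return reg->mmap_addr == MAP_FAILED ? -1 : 0;
}

/* Translate a guest physical address (e.g. a vring address) to a local
 * virtual address by searching for the region that covers it. */
static void *
gpa_to_vva(struct mem_region *regs, unsigned int nregions, uint64_t gpa)
{
	unsigned int i;

	for (i = 0; i < nregions; i++) {
		struct mem_region *r = &regs[i];

		if (gpa >= r->guest_phys_addr &&
		    gpa <  r->guest_phys_addr + r->size)
			return (uint8_t *)r->mmap_addr + r->mmap_offset +
			       (gpa - r->guest_phys_addr);
	}
	return NULL;  /* GPA not covered by any region */
}

As long as qemu describes each backing file with its own region, fd and offset,
multiple hugepage files are handled the same way as one.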

I saw your post to the qemu mailing list. Would you mind if I paste it here?
Qemu uses two 1GB hugepage files to back the guest's 2GB memory, and indeed we get "2GB" of memory regions.

The problem is that all of the 2GB memory maps to the first hugepage file, node0.MvcPyi.
Seems like a bug.


qemu command:
-m 2048 -smp 2,sockets=2,cores=1,threads=1
-object memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=on,size=1024M,id=ram-node0 -numa node,nodeid=0,cpus=0,memdev=ram-node0
-object memory-backend-file,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=on,size=1024M,id=ram-node1 -numa node,nodeid=1,cpus=1,memdev=ram-node1


memory regions:
        gpa = 0xC0000
        size = 2146697216
        ua = 0x2aaaaacc0000
        offset = 786432

        gpa = 0x0
        size = 655360
        ua = 0x2aaaaac00000
        offset = 0

hugepage:
cat /proc/$(pidof qemu)/maps
2aaaaac00000-2aaaeac00000 rw-s 00000000 00:18 10357788                   /dev/hugepages/libvirt/qemu/qemu_back_mem._objects_ram-node0.MvcPyi (deleted)
2aaaeac00000-2aab2ac00000 rw-s 00000000 00:18 10357789                   /dev/hugepages/libvirt/qemu/qemu_back_mem._objects_ram-node1.tjAVin (deleted)

The memory size of each region doesn't match the size of each hugepage file; is this OK? How does vhost-user mmap all the hugepages?
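
For what it is worth, a quick check of the numbers pasted above (assuming they are
accurate) shows the two regions together cover the whole 2GB of guest RAM except the
usual 0xA0000-0xC0000 legacy hole, which may be why the region sizes do not line up
one-to-one with the hugepage file sizes:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t region_low  = 655360;             /* gpa 0x0     .. 0xA0000 */
	uint64_t region_high = 2146697216;         /* gpa 0xC0000 .. 2GB     */
	uint64_t hole        = 0xC0000 - 0xA0000;  /* legacy VGA/ROM hole    */

	/* 655360 + 2146697216 + 131072 = 2147483648 = 2GB */
	printf("%llu\n", (unsigned long long)(region_low + region_high + hole));
	return 0;
}

So the regions themselves cover (almost) all of guest RAM; the open question is how
each region's ua and offset relate to the two hugepage files.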

