[dpdk-dev] [PATCH v2 0/6] vhost-user live migration support
Pavel Fedin
p.fedin at samsung.com
Mon Dec 21 09:17:03 CET 2015
Works fine.
Tested-by: Pavel Fedin <p.fedin at samsung.com>
Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia
> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu at linux.intel.com]
> Sent: Thursday, December 17, 2015 6:12 AM
> To: dev at dpdk.org
> Cc: huawei.xie at intel.com; Michael S. Tsirkin; Victor Kaplansky; Iremonger Bernard; Pavel
> Fedin; Peter Xu; Yuanhan Liu; Chen Zhihui; Yang Maggie
> Subject: [PATCH v2 0/6] vhost-user live migration support
>
> This patch set adds the vhost-user live migration support.
>
> The major task behind that is to log the pages we touch during
> live migration, including the used vring and the desc buffers. So,
> this patch set is basically about adding vhost log support, and
> then using it.
>
> Patchset
> ========
> - Patch 1 handles VHOST_USER_SET_LOG_BASE, which tells us where
> the dirty memory bitmap is.
>
> - Patch 2 introduces a vhost_log_write() helper function to log
> pages we are going to change; a rough sketch of such a helper
> follows this list.
>
> - Patch 3 logs changes we made to used vring.
>
> - Patch 4 logs changes we made to vring desc buffer.
>
> - Patches 5 and 6 add some feature bits related to live migration.
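>
> For reference, the logging itself boils down to setting one bit per
> touched page in the shared dirty bitmap. Below is a minimal sketch of
> such a helper; the names, the flat log_base/log_size parameters and
> the 4 KiB page granularity are illustrative assumptions, not the
> exact patch code:
>
>     #include <stdint.h>
>     #include <stddef.h>
>
>     #define VHOST_LOG_PAGE 4096
>
>     /* Set the bit for one dirty page; real code may need an
>      * atomic OR here if several threads can log concurrently. */
>     static inline void
>     vhost_log_page(uint8_t *log_base, uint64_t page)
>     {
>         log_base[page / 8] |= 1 << (page % 8);
>     }
>
>     /* Mark the guest-physical range [addr, addr + len) dirty. */
>     static inline void
>     vhost_log_write(uint8_t *log_base, uint64_t log_size,
>                     uint64_t addr, uint64_t len)
>     {
>         uint64_t page;
>
>         if (log_base == NULL || len == 0)
>             return;
>
>         /* Ignore ranges that fall outside the bitmap. */
>         if ((addr + len - 1) / VHOST_LOG_PAGE / 8 >= log_size)
>             return;
>
>         for (page = addr / VHOST_LOG_PAGE;
>              page * VHOST_LOG_PAGE < addr + len;
>              page++)
>             vhost_log_page(log_base, page);
>     }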
>
>
> A simple test guide (on same host)
> ==================================
>
> The following test is based on OVS + DPDK (check [0] for
> how to set up OVS + DPDK):
>
> [0]: http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
>
> Here is the rough test guide:
>
> 1. Start ovs-vswitchd, e.g.:
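>
>    How exactly depends on your OVS build; with the DPDK-enabled
>    builds described in [0] it looks roughly like this (the core mask,
>    socket memory and db.sock path are just examples):
>
>    $ ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024 \
>          -- unix:/var/run/openvswitch/db.sock --pidfile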
>
> 2. Add two OVS vhost-user ports, say vhost0 and vhost1, e.g.:
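>
>    The port names determine the socket paths used below; the bridge
>    name ovsbr0 is just an example:
>
>    $ ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 type=dpdkvhostuser
>    $ ovs-vsctl add-port ovsbr0 vhost1 -- set Interface vhost1 type=dpdkvhostuser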
>
> 3. Start VM1, connected to vhost0. Here is my example:
>
> $ $QEMU -enable-kvm -m 1024 -smp 4 \
> -chardev socket,id=char0,path=/var/run/openvswitch/vhost0 \
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
> -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
> -numa node,memdev=mem -mem-prealloc \
> -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
> -hda fc-19-i386.img \
> -monitor telnet::3333,server,nowait -curses
>
> 4. Run "ping $host" inside VM1
>
> 5. Start VM2, connected to vhost1 this time, and mark it as the
> target of live migration (by adding the -incoming tcp:0:4444
> option):
>
> $ $QEMU -enable-kvm -m 1024 -smp 4 \
> -chardev socket,id=char0,path=/var/run/openvswitch/vhost1 \
> -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
> -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
> -numa node,memdev=mem -mem-prealloc \
> -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
> -hda fc-19-i386.img \
> -monitor telnet::3334,server,nowait -curses \
> -incoming tcp:0:4444
>
> 6. Connect to the VM1 monitor and start the migration:
>
> > migrate tcp:0:4444
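>
>    The monitor was exposed via telnet in step 3, so on the same host
>    this amounts to:
>
>    $ telnet localhost 3333
>    (qemu) migrate tcp:0:4444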
>
> 7. After a while, you will find that VM1 has been migrated to VM2,
> and the "ping" command keeps running without interruption.
>
>
> Cc: Chen Zhihui <zhihui.chen at intel.com>
> Cc: Yang Maggie <maggie.yang at intel.com>
> ---
> Yuanhan Liu (6):
> vhost: handle VHOST_USER_SET_LOG_BASE request
> vhost: introduce vhost_log_write
> vhost: log used vring changes
> vhost: log vring desc buffer changes
> vhost: claim that we support GUEST_ANNOUNCE feature
> vhost: enable log_shmfd protocol feature
>
> lib/librte_vhost/rte_virtio_net.h | 36 ++++++++++-
> lib/librte_vhost/vhost_rxtx.c | 88 +++++++++++++++++++--------
> lib/librte_vhost/vhost_user/vhost-net-user.c | 7 ++-
> lib/librte_vhost/vhost_user/vhost-net-user.h | 6 ++
> lib/librte_vhost/vhost_user/virtio-net-user.c | 48 +++++++++++++++
> lib/librte_vhost/vhost_user/virtio-net-user.h | 5 +-
> lib/librte_vhost/virtio-net.c | 5 ++
> 7 files changed, 165 insertions(+), 30 deletions(-)
>
> --
> 1.9.0