[dpdk-dev] [PATCH v3 0/8] vhost-user live migration support

Yuanhan Liu yuanhan.liu at linux.intel.com
Fri Jan 29 05:57:55 CET 2016


This patch set adds vhost-user live migration support.

The major task behind that is to log the guest pages we touch during
live migration, including the used vrings and the desc buffers. So
this patch set is basically about adding vhost dirty-page logging
support and making use of it.
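
Here is a minimal sketch of the dirty-page logging idea (the names and
details are illustrative, not the exact code in the patches): the log
is a bitmap shared with QEMU, one bit per page of guest memory, and
every guest-physical range the backend writes gets its pages marked
dirty so QEMU knows to re-send them:

    #include <stdint.h>

    #define VHOST_LOG_PAGE  4096

    /* Mark one page dirty in the shared bitmap. Real code would need
     * an atomic OR if several queues may log concurrently. */
    static inline void
    log_mark_page(uint8_t *log_base, uint64_t page)
    {
        log_base[page / 8] |= 1 << (page % 8);
    }

    /* Mark every page covered by [addr, addr + len) dirty. */
    static inline void
    log_write(uint8_t *log_base, uint64_t addr, uint64_t len)
    {
        uint64_t page = addr / VHOST_LOG_PAGE;

        while (page * VHOST_LOG_PAGE < addr + len) {
            log_mark_page(log_base, page);
            page++;
        }
    }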

Another important thing is that the switches need to be notified of
the VM's new location after migration is done. The GUEST_ANNOUNCE
feature is for that: it has the guest send a GARP message after
migration. For older guest kernels (<= v3.4) without GUEST_ANNOUNCE
support, we construct and broadcast a RARP message ourselves, using
the MAC address carried in the VHOST_USER_SEND_RARP payload.
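
For reference, such a RARP announce is just a broadcast Ethernet frame
(ethertype 0x8035) carrying an ARP-style payload with opcode 3 and the
guest MAC in the hardware address fields (RFC 903). A rough sketch,
not the exact helper added by patch 6:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    struct rarp_frame {
        uint8_t  dst[6];      /* ff:ff:ff:ff:ff:ff, broadcast  */
        uint8_t  src[6];      /* guest MAC                     */
        uint16_t ether_type;  /* 0x8035: RARP                  */
        uint16_t htype;       /* 1: Ethernet                   */
        uint16_t ptype;       /* 0x0800: IPv4                  */
        uint8_t  hlen;        /* 6                             */
        uint8_t  plen;        /* 4                             */
        uint16_t op;          /* 3: request reverse            */
        uint8_t  sha[6];      /* sender HW addr = guest MAC    */
        uint8_t  spa[4];      /* sender proto addr, left zero  */
        uint8_t  tha[6];      /* target HW addr = guest MAC    */
        uint8_t  tpa[4];      /* target proto addr, left zero  */
    } __attribute__((packed));

    static void
    make_rarp(struct rarp_frame *f, const uint8_t mac[6])
    {
        memset(f, 0, sizeof(*f));
        memset(f->dst, 0xff, 6);
        memcpy(f->src, mac, 6);
        f->ether_type = htons(0x8035);
        f->htype = htons(1);
        f->ptype = htons(0x0800);
        f->hlen = 6;
        f->plen = 4;
        f->op = htons(3);
        memcpy(f->sha, mac, 6);
        memcpy(f->tha, mac, 6);
    }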

Patchset
========
- Patch 1 handles the VHOST_USER_SET_LOG_BASE request, which tells us
  where the dirty-page bitmap is (see the sketch after this list).

- Patch 2 introduces the vhost_log_write() helper function to log the
  pages we are about to change.

- Patch 3 logs the changes we make to the used vrings.

- Patch 4 logs the changes we make to the vring desc buffers.

- Patches 5 and 7 add the feature bits related to live migration.

- Patch 6 does the RARP construction and broadcast job.

- Patch 8 is a trivial cleanup that removes a duplicate header include.
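
As referenced in the patch 1 item above, here is roughly what handling
VHOST_USER_SET_LOG_BASE boils down to (the names are illustrative, not
the exact ones in the patch): QEMU passes an fd for the dirty-page
bitmap plus a size and offset in the message payload; the backend
mmaps it and keeps the pointer around for vhost_log_write() to set
bits in:

    #include <stdint.h>
    #include <sys/mman.h>

    struct dirty_log {
        uint8_t  *base;   /* start of the shared bitmap  */
        uint64_t  size;   /* usable size of the bitmap   */
    };

    static int
    set_log_base(struct dirty_log *log, int fd,
                 uint64_t mmap_size, uint64_t mmap_offset)
    {
        void *addr;

        /* Map the log region QEMU shared with us over the socket. */
        addr = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
        if (addr == MAP_FAILED)
            return -1;

        log->base = (uint8_t *)addr + mmap_offset;
        log->size = mmap_size - mmap_offset;
        return 0;
    }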


A simple test guide (on same host)
==================================

The following test is based on OVS + DPDK (check [0] for
how to set up OVS + DPDK):

    [0]: http://wiki.qemu.org/Features/vhost-user-ovs-dpdk

Here is the rough test guide:

1. Start ovs-vswitchd

2. Add two OVS vhost-user ports, say vhost0 and vhost1

3. Start VM1, connecting it to vhost0. Here is my example:

   $ $QEMU -enable-kvm -m 1024 -smp 4 \
       -chardev socket,id=char0,path=/var/run/openvswitch/vhost0  \
       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
       -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
       -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
       -numa node,memdev=mem -mem-prealloc \
       -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
       -hda fc-19-i386.img \
       -monitor telnet::3333,server,nowait -curses

4. run "ping $host" inside VM1

5. Start VM2, connecting it to vhost1, and mark it as the target
   of live migration (by adding the -incoming tcp:0:4444 option):

   $ $QEMU -enable-kvm -m 1024 -smp 4 \
       -chardev socket,id=char0,path=/var/run/openvswitch/vhost1  \
       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
       -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
       -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
       -numa node,memdev=mem -mem-prealloc \
       -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
       -hda fc-19-i386.img \
       -monitor telnet::3334,server,nowait -curses \
       -incoming tcp:0:4444 

6. Connect to the VM1 monitor (telnet to port 3333) and start the migration:

   > migrate tcp:0:4444

7. After a while, you will find that VM1 has been migrated to VM2,
   and the "ping" command keeps running without interruption.




---
Yuanhan Liu (8):
  vhost: handle VHOST_USER_SET_LOG_BASE request
  vhost: introduce vhost_log_write
  vhost: log used vring changes
  vhost: log vring desc buffer changes
  vhost: claim that we support GUEST_ANNOUNCE feature
  vhost: handle VHOST_USER_SEND_RARP request
  vhost: enable log_shmfd protocol feature
  vhost: remove duplicate header include

 doc/guides/rel_notes/release_2_3.rst          |   2 +
 lib/librte_vhost/rte_virtio_net.h             |   9 +-
 lib/librte_vhost/vhost_rxtx.c                 | 114 +++++++++++++----
 lib/librte_vhost/vhost_user/vhost-net-user.c  |  11 +-
 lib/librte_vhost/vhost_user/vhost-net-user.h  |   7 ++
 lib/librte_vhost/vhost_user/virtio-net-user.c | 174 +++++++++++++++++++++++++-
 lib/librte_vhost/vhost_user/virtio-net-user.h |   8 +-
 lib/librte_vhost/virtio-net.c                 |   5 +
 8 files changed, 299 insertions(+), 31 deletions(-)

-- 
1.9.0
