[dpdk-dev] [PATCH] vhost: fix qemu shutdown issue

Ouyang Changchun changchun.ouyang at intel.com
Thu Aug 20 06:01:10 CEST 2015


This patch originates from the patch:
[dpdk-dev] [PATCH 1/2] Patch for Qemu wrapper for US-VHost to ensure Qemu process ends when VM is shutdown
http://dpdk.org/ml/archives/dev/2014-June/003606.html

Without it, virsh destroy kills only the qemu-wrap.py process and leaves the
spawned qemu-kvm process running. The wrapper now starts QEMU in its own
process group, records its pid via QEMU's -pidfile option, and installs
signal handlers that terminate the whole group, so shutting down the wrapper
also shuts down the VM.

Also update the vhost sample application guide accordingly.

Signed-off-by: Claire Murphy <claire.k.murphy at intel.com>
Signed-off-by: Changchun Ouyang <changchun.ouyang at intel.com>
---
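Note: the core of the fix is a standard wrapper pattern: start the child in
its own session, record its pid, and forward termination signals to the
child's process group. Below is a minimal standalone sketch of that pattern,
with a placeholder "sleep 60" command and a hypothetical pidfile path
(neither appears in the patch itself):

    import os
    import signal
    import subprocess

    # Hypothetical pidfile path; the patch derives its own from os.getpid().
    pidfile = "/tmp/%d-child.pid" % os.getpid()

    def kill_child(signum, stack):
        # Forward termination to the child's whole process group so that
        # killing the wrapper also kills everything it spawned.
        with open(pidfile) as f:
            os.killpg(int(f.read()), signal.SIGTERM)

    # os.setsid makes the child the leader of a new session and process
    # group, so os.killpg() can address it and anything it forks.
    child = subprocess.Popen("sleep 60", shell=True, preexec_fn=os.setsid)
    with open(pidfile, "w") as f:
        f.write(str(child.pid))

    for sig in (signal.SIGTERM, signal.SIGINT, signal.SIGHUP, signal.SIGQUIT):
        signal.signal(sig, kill_child)

    child.wait()
    if os.access(pidfile, os.F_OK):
        os.remove(pidfile)

With the handlers installed, virsh destroy (which sends SIGTERM to the
wrapper) brings the spawned process down as well; the patch below applies
the same pattern, with QEMU's -pidfile option supplying the pid to kill.
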
 doc/guides/sample_app_ug/vhost.rst    |  9 ---------
 lib/librte_vhost/libvirt/qemu-wrap.py | 29 +++++++++++++++++++++++++----
 2 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index 730b9da..743908d 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -717,15 +717,6 @@ Common Issues
     needs access to the shared memory from the guest to receive and transmit packets. It is important to make sure
     the QEMU version supports shared memory mapping.
 
-*   Issues with ``virsh destroy`` not destroying the VM:
-
-    Using libvirt ``virsh create`` the ``qemu-wrap.py`` spawns a new process to run ``qemu-kvm``. This impacts the behavior
-    of ``virsh destroy`` which kills the process running ``qemu-wrap.py`` without actually destroying the VM (it leaves
-    the ``qemu-kvm`` process running):
-
-    This following patch should fix this issue:
-        http://dpdk.org/ml/archives/dev/2014-June/003607.html
-
 *   In an Ubuntu environment, QEMU fails to start a new guest normally with user space VHOST due to not being able
     to allocate huge pages for the new guest:
 
diff --git a/lib/librte_vhost/libvirt/qemu-wrap.py b/lib/librte_vhost/libvirt/qemu-wrap.py
index 5096011..30a0d50 100755
--- a/lib/librte_vhost/libvirt/qemu-wrap.py
+++ b/lib/librte_vhost/libvirt/qemu-wrap.py
@@ -76,6 +76,7 @@
 #                "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
 #                "/dev/rtc", "/dev/hpet", "/dev/net/tun",
 #                "/dev/<devbase-name>-<index>",
+#                "/dev/hugepages",
 #            ]
 #
 #   4.b) Disable SELinux or set to permissive mode
@@ -161,6 +162,8 @@ hugetlbfs_dir = ""
 #############################################
 
 import sys, os, subprocess
+import time
+import signal
 
 
 #List of open userspace vhost file descriptors
@@ -174,6 +177,18 @@ vhost_flags = [ "csum=off",
                 "guest_ecn=off"
               ]
 
+#Path of the file in which QEMU stores its pid
+qemu_pid = "/tmp/%d-qemu.pid" % os.getpid()
+
+#############################################
+# Signal handler to kill the QEMU subprocess
+#############################################
+def kill_qemu_process(signum, stack):
+    pidfile = open(qemu_pid, 'r')
+    pid = int(pidfile.read())
+    os.killpg(pid, signal.SIGTERM)
+    pidfile.close()
+
 
 #############################################
 # Find the system hugefile mount point.
@@ -280,7 +295,7 @@ def main():
     while (num < num_cmd_args):
         arg = sys.argv[num]
 
-		#Check netdev +1 parameter for vhostfd
+	#Check netdev +1 parameter for vhostfd
         if arg == '-netdev':
             num_vhost_devs = len(fd_list)
             new_args.append(arg)
@@ -333,7 +348,6 @@ def main():
         emul_call += mp
         emul_call += " "
 
-
     #add user options
     for opt in emul_opts_user:
         emul_call += opt
@@ -353,14 +367,21 @@ def main():
         emul_call+=str(arg)
         emul_call+= " "
 
+    emul_call += "-pidfile %s " % qemu_pid
     #Call QEMU
-    subprocess.call(emul_call, shell=True)
+    process = subprocess.Popen(emul_call, shell=True, preexec_fn=os.setsid)
+
+    for sig in [signal.SIGTERM, signal.SIGINT, signal.SIGHUP, signal.SIGQUIT]:
+        signal.signal(sig, kill_qemu_process)
 
+    process.wait()
 
     #Close usvhost files
     for fd in fd_list:
         os.close(fd)
-
+    #Clean up temporary files
+    if os.access(qemu_pid, os.F_OK):
+        os.remove(qemu_pid)
 
 if __name__ == "__main__":
     main()
-- 
1.8.4.2


