[dts] [PATCH V2] add multi queue qemu test suite

lihong lihongx.ma at intel.com
Thu Jan 11 17:00:38 CET 2018


From: malihong <lihongx.ma at intel.com>

Signed-off-by: lihong <lihongx.ma at intel.com>
---
 test_plans/vhost_multi_queue_qemu_test_plan.rst | 192 +++++++++++++++++
 tests/TestSuite_vhost_multi_queue_qemu.py       | 272 ++++++++++++++++++++++++
 2 files changed, 464 insertions(+)
 create mode 100644 test_plans/vhost_multi_queue_qemu_test_plan.rst
 create mode 100644 tests/TestSuite_vhost_multi_queue_qemu.py

diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst
new file mode 100644
index 0000000..7a05d1a
--- /dev/null
+++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst
@@ -0,0 +1,192 @@
+.. Copyright (c) <2016>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+==========================================
+Vhost/Virtio multiple queue qemu test plan
+==========================================
+
+This test plan covers the vhost/virtio-pmd multiple queue test cases with
+qemu. Testpmd is used as the test application.
+
+Test Case 1: DPDK vhost-pmd/virtio-pmd PVP 2 queues mergeable performance
+=========================================================================
+
+Flow:
+TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
+
+1. Bind one port to igb_uio, then launch testpmd with the command below::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
+    -i --nb-cores=2 --rxq=2 --txq=2
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM1, set queues=2, vectors=2*queues+2, mq=on::
+
+    qemu-system-x86_64 -name vm1 -cpu host -enable-kvm \
+    -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
+    -mem-prealloc -smp cores=3,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
+    -vnc :2 -daemonize
+
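The ``vectors=6`` in the QEMU command above is an instance of the ``vectors = 2*queues + 2`` rule named in the step title: one RX and one TX interrupt vector per queue pair, plus two extra vectors (conventionally for config changes and the control virtqueue). A minimal standalone sketch of that arithmetic (plain Python, not part of the suite):

```python
def vectors_for(queues):
    # One RX + one TX vector per queue pair, plus 2 extra vectors
    # (config change and control virtqueue), per the rule in the test plan.
    return 2 * queues + 2

# queues=2 in the QEMU command above gives vectors=6.
print(vectors_for(2))
```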
+3. On VM1, bind the virtio net device to igb_uio and run testpmd::
+
+    ./testpmd -c 0x07 -n 3 -- -i \
+    --rxq=2 --txq=2 --txqflags=0xf01 --rss-ip --nb-cores=2
+    testpmd>set fwd mac
+    testpmd>start
+
+4. Check the performance of vhost/virtio with 2 cores and 2 queues.
+
+Test Case 2: DPDK PVP virtio-pmd queue number dynamic change performance check
+==============================================================================
+
+This case checks whether virtio-pmd works correctly when the queue number
+changes dynamically. Set the maximum queue number of both vhost-pmd and
+virtio-pmd to 2. Launch vhost-pmd with 2 queues. Launch virtio-pmd with
+1 queue first, then change the number to 2 queues from within testpmd.
+No crash is expected, and after the queue number changes, virtio-pmd
+should RX/TX packets normally on both queues.
+
+
+Flow:
+TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
+
+1. Bind one port to igb_uio, then launch testpmd with the command below,
+   ensuring the vhost uses 2 queues::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
+    -i --nb-cores=2 --rxq=2 --txq=2
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM1, set queues=2, vectors=2*queues+2, mq=on::
+
+    qemu-system-x86_64 -name vm1 -cpu host -enable-kvm \
+    -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
+    -mem-prealloc -smp cores=3,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
+    -vnc :2 -daemonize
+
+3. On VM1, bind the virtio net device to igb_uio and run testpmd,
+   using one queue at first::
+
+    ./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --txqflags=0xf01 \
+    --rss-ip --nb-cores=1
+    testpmd>set fwd mac
+    testpmd>start
+
+4. On VM1, dynamically change the queue number at the virtio-pmd side from
+   1 queue to 2 queues, then ensure virtio-pmd RX/TX still works normally.
+   The expected behavior is that both queues can RX/TX traffic::
+
+    testpmd>stop
+    testpmd>port stop all
+    testpmd>port config all rxq 2
+    testpmd>port config all txq 2
+    testpmd>port start all
+    testpmd>start
+
+   After 10 seconds, stop forwarding::
+
+    testpmd>stop
+
+   Then check each queue's RX/TX packet counters.
+
+5. There should be no core dump or unexpected crash during the queue
+   number change.
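The final check in step 4 (check each queue's RX/TX packet counters) can be automated by parsing the per-stream statistics testpmd prints when forwarding stops. The exact layout of that output varies across DPDK versions, so the sample text and regex below are a sketch against one observed format, not a stable interface:

```python
import re

# Sample of the per-stream block testpmd prints on "stop" (layout may differ
# between DPDK versions; treat this format as an assumption).
sample = """
  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
  RX-packets: 1001502        TX-packets: 1001502        TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
  RX-packets: 998498         TX-packets: 998498         TX-dropped: 0
"""

def queue_counters(output):
    # Map RX queue id -> (RX-packets, TX-packets) from the stop output.
    pattern = (r"RX Port=\s*\d+/Queue=\s*(\d+).*?"
               r"RX-packets:\s*(\d+)\s+TX-packets:\s*(\d+)")
    return {int(q): (int(rx), int(tx))
            for q, rx, tx in re.findall(pattern, output, re.DOTALL)}

counters = queue_counters(sample)
# Every configured queue should have forwarded traffic after the change.
assert all(rx > 0 and tx > 0 for rx, tx in counters.values())
```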
+
+
+Test Case 3: DPDK PVP vhost-pmd queue number dynamic change performance check
+=============================================================================
+
+This case checks whether dynamically changing the vhost-pmd queue number
+works correctly. Set the maximum queue number of both vhost-pmd and
+virtio-pmd to 2. Launch vhost-pmd with 1 queue first, then change the
+queue number to 2 from within testpmd. At the virtio-pmd side, launch it
+with 2 queues. No crash is expected, and after the dynamic change,
+vhost-pmd should RX/TX packets normally on both queues.
+
+
+Flow:
+TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
+
+1. Bind one port to igb_uio, then launch testpmd with the command below;
+   the vhost device supports 2 queues but only 1 queue is enabled at first::
+
+    rm -rf vhost-net*
+    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
+    -i --nb-cores=1 --rxq=1 --txq=1
+    testpmd>set fwd mac
+    testpmd>start
+
+2. Launch VM1, set queues=2, vectors=2*queues+2, mq=on::
+
+    qemu-system-x86_64 -name vm1 -cpu host -enable-kvm \
+    -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \
+    -mem-prealloc -smp cores=3,sockets=1 -drive file=/home/osimg/ubuntu16.img \
+    -chardev socket,id=char0,path=./vhost-net \
+    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
+    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6 \
+    -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
+    -vnc :2 -daemonize
+
+3. On VM1, bind the virtio net device to igb_uio and run testpmd with
+   2 queues::
+
+    ./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \
+    --txqflags=0xf01 --rss-ip --nb-cores=2
+    testpmd>set fwd mac
+    testpmd>start
+
+4. On the host, dynamically change the queue number at the vhost-pmd side
+   from 1 queue to 2 queues, then ensure vhost-pmd RX/TX still works
+   normally. The expected behavior is that both queues can RX/TX traffic::
+
+    testpmd>stop
+    testpmd>port stop all
+    testpmd>port config all rxq 2
+    testpmd>port config all txq 2
+    testpmd>port start all
+    testpmd>start
+
+   After 10 seconds, stop forwarding::
+
+    testpmd>stop
+
+   Then check each queue's RX/TX packet counters.
+
+5. There should be no core dump or unexpected crash during the queue
+   number change.
diff --git a/tests/TestSuite_vhost_multi_queue_qemu.py b/tests/TestSuite_vhost_multi_queue_qemu.py
new file mode 100644
index 0000000..258c991
--- /dev/null
+++ b/tests/TestSuite_vhost_multi_queue_qemu.py
@@ -0,0 +1,272 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""
+DPDK Test suite.
+
+Vhost/virtio-pmd multiple queue PVP performance tests using QEMU.
+"""
+import os
+import re
+import time
+import utils
+from scapy.utils import wrpcap, rdpcap
+from test_case import TestCase
+from exception import VerifyFailure
+from settings import HEADER_SIZE
+from etgen import IxiaPacketGenerator
+from qemu_kvm import QEMUKvm
+
+
+class TestVhostMultiQueueQemu(TestCase):
+
+    def set_up_all(self):
+        # Get and verify the ports
+        self.dut_ports = self.dut.get_ports()
+        self.verify(len(self.dut_ports) >= 1, "Insufficient ports for testing")
+
+        # Get the port's socket
+        self.pf = self.dut_ports[0]
+        netdev = self.dut.ports_info[self.pf]['port']
+        self.socket = netdev.get_nic_socket()
+        self.cores = self.dut.get_core_list("1S/3C/1T", socket=self.socket)
+
+        self.queue_number = 2
+
+        self.virtio1 = "eth1"
+        self.virtio1_mac = "52:54:00:00:00:01"
+        self.src1 = "192.168.4.1"
+        self.dst1 = "192.168.3.1"
+        self.vm_dut = None
+
+        self.number_of_ports = 1
+        self.header_row = ["FrameSize(B)", "Throughput(Mpps)", "LineRate(%)", "Cycle"]
+        self.memory_channel = 4
+        if self.dut.cores[len(self.dut.cores)-1]['socket'] == '0':
+            self.socket_mem = '1024'
+        else:
+            self.socket_mem = '1024,1024'
+
+    def set_up(self):
+        #
+        # Run before each test case.
+        #
+        self.dut.send_expect("rm -rf ./vhost.out", "#")
+        self.dut.send_expect("rm -rf ./vhost-net*", "#")
+        self.dut.send_expect("killall -s INT vhost-switch", "#")
+
+        self.frame_sizes = [64, 128, 256, 512, 1024, 1500]
+        self.vm_testpmd_vector = self.target + "/app/testpmd -c 0x07 -n 3" + \
+                                 " -- -i --txqflags=0xf01 " + \
+                                 " --rxq=%d --txq=%d --rss-ip --nb-cores=2" % (self.queue_number, self.queue_number)
+        self.vm_testpmd_normal = self.target + "/app/testpmd -c 0x07 -n 3" + \
+                                 " -- -i --txqflags=0xf00 " + \
+                                 " --rxq=%d --txq=%d --rss-ip --nb-cores=2" % (self.queue_number, self.queue_number)
+
+    def launch_testpmd(self, queue=2):
+        #
+        # Launch the vhost sample with different parameters
+        #
+        self.testcmd = self.target + "/app/testpmd -c %s -n %d --socket-mem %s" + \
+                       " --vdev 'net_vhost0,iface=vhost-net,queues=%d' -- -i --rxq=%d --txq=%d --nb-cores=2"
+        self.coremask = utils.create_mask(self.cores)
+        self.testcmd_start = self.testcmd % (self.coremask, self.memory_channel, self.socket_mem, queue, queue, queue)
+
+        self.vhost_user = self.dut.new_session(suite="user")
+
+        self.vhost_user.send_expect("cd /root/dpdk", "#", 120)
+        self.vhost_user.send_expect(self.testcmd_start, "testpmd> ", 120)
+        self.vhost_user.send_expect("set fwd mac", "testpmd> ", 120)
+        self.vhost_user.send_expect("start", "testpmd> ", 120)
+
+    def start_onevm(self, path="", modern=0):
+        #
+        # Start one VM with one virtio device
+        #
+        self.vm = QEMUKvm(self.dut, 'vm0', 'vhost_sample')
+        if path != "":
+            self.vm.set_qemu_emulator(path)
+        vm_params = {}
+        vm_params['driver'] = 'vhost-user'
+        vm_params['opt_path'] = './vhost-net'
+        vm_params['opt_mac'] = self.virtio1_mac
+        vm_params['opt_queue'] = self.queue_number
+        vm_params['opt_settings'] = 'mrg_rxbuf=on,mq=on,vectors=6'
+        if modern == 1:
+            # Keep the multi-queue settings when enabling virtio modern mode
+            vm_params['opt_settings'] = 'disable-modern=false,mrg_rxbuf=on,mq=on,vectors=6'
+        self.vm.set_vm_device(**vm_params)
+
+        try:
+            self.vm_dut = self.vm.start()
+            if self.vm_dut is None:
+                raise Exception("Set up VM ENV failed")
+        except Exception as e:
+            self.logger.error("ERROR: Failure for %s" % str(e))
+            raise
+
+        return True
+
+    def vm_testpmd_start(self):
+        #
+        # Start testpmd in vm
+        #
+        if self.vm_dut is not None:
+            self.vm_dut.send_expect(self.vm_testpmd_vector, "testpmd>", 20)
+            self.vm_dut.send_expect("set fwd mac", "testpmd>", 20)
+            self.vm_dut.send_expect("start tx_first", "testpmd>")
+
+    def send_verify(self, case, frame_sizes, tag="Performance"):
+        self.result_table_create(self.header_row)
+        destination_mac = "52:54:00:00:00:01"
+        for frame_size in frame_sizes:
+            info = "Running test %s with frame size %d." % (case, frame_size)
+            self.logger.info(info)
+            payload_size = frame_size - HEADER_SIZE['eth'] - HEADER_SIZE['ip'] - HEADER_SIZE['udp']
+            tgenInput = []
+
+            self.tester.scapy_append('a= [Ether(dst="%s")/IP(dst="1.1.1.1")/UDP()/("X"*%d)]' % (destination_mac, payload_size))
+            self.tester.scapy_append('b= [Ether(dst="%s")/IP(dst="1.1.1.20")/UDP()/("X"*%d)]' % (destination_mac, payload_size))
+            self.tester.scapy_append('c= [Ether(dst="%s")/IP(dst="1.1.1.7")/UDP()/("X"*%d)]' % (destination_mac, payload_size))
+            self.tester.scapy_append('d= [Ether(dst="%s")/IP(dst="1.1.1.8")/UDP()/("X"*%d)]' % (destination_mac, payload_size))
+            self.tester.scapy_append('a= a + b + c + d')
+            self.tester.scapy_append('wrpcap("multiqueue_2.pcap", a)')
+            self.tester.scapy_execute()
+
+            port = self.tester.get_local_port(self.pf)
+            tgenInput.append((port, port, "multiqueue_2.pcap"))
+
+            _, pps = self.tester.traffic_generator_throughput(tgenInput, delay=30)
+            Mpps = pps / 1000000.0
+            pct = Mpps * 100 / float(self.wirespeed(self.nic, frame_size,
+                                     self.number_of_ports))
+            data_row = [frame_size, str(Mpps), str(pct), tag]
+            self.result_table_add(data_row)
+            self.verify(Mpps != 0, "The received packet count of frame size %d is 0" % frame_size)
+        self.result_table_print()
+
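The LineRate(%) column in send_verify() divides the measured rate by the theoretical wire speed for the frame size, which the suite obtains from self.wirespeed(). For plain Ethernet, each frame occupies 20 extra bytes on the wire (7B preamble, 1B start-of-frame delimiter, 12B inter-frame gap), so the maximum packet rate is link_bps / ((frame_size + 20) * 8). A standalone sketch of that calculation, assuming this standard overhead model (the framework helper may differ):

```python
def max_pps(link_bps, frame_size):
    # 20 bytes of per-frame wire overhead: 7B preamble + 1B SFD + 12B IFG.
    return link_bps / ((frame_size + 20) * 8.0)

def line_rate_pct(mpps, link_bps, frame_size):
    # Measured Mpps as a percentage of the theoretical wire speed,
    # mirroring the pct computation in send_verify().
    return mpps * 1e6 * 100.0 / max_pps(link_bps, frame_size)

# 10GbE at 64B frames tops out at ~14.88 Mpps.
print(round(max_pps(10e9, 64) / 1e6, 2))
```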
+    def test_perf_pvp_multiqemu_mergeable_pmd(self):
+        #
+        # Test the performance for mergeable path
+        #
+        self.launch_testpmd()
+        self.start_onevm()
+        self.vm_dut.send_expect(self.vm_testpmd_vector, "testpmd>", 20)
+        self.vm_dut.send_expect("set fwd mac", "testpmd>", 20)
+        self.vm_dut.send_expect("start", "testpmd>")
+
+        self.vhost_user.send_expect("stop", "testpmd> ", 120)
+        self.vhost_user.send_expect("start", "testpmd> ", 120)
+        time.sleep(5)
+        self.send_verify(self.running_case, self.frame_sizes, "Virtio 0.95 Mergeable Multiqueue Performance")
+        self.vm_dut.kill_all()
+
+    def test_perf_dynamic_change_virtio_queue_size(self):
+        #
+        # Test the performance when dynamically changing the virtio queue number
+        #
+        self.launch_testpmd()
+        self.start_onevm()
+        self.vm_testpmd_queue_1 = self.target + "/app/testpmd -c 0x07 -n 3" + \
+                                  " -- -i --txqflags=0xf01 " + \
+                                  " --rxq=1 --txq=1 --rss-ip --nb-cores=1"
+
+        self.vm_dut.send_expect(self.vm_testpmd_queue_1, "testpmd>", 20)
+        self.vm_dut.send_expect("set fwd mac", "testpmd>", 20)
+        self.vm_dut.send_expect("start", "testpmd>")
+
+        self.send_verify(self.running_case, self.frame_sizes, "Performance before change virtio queue size")
+
+        self.vm_dut.send_expect("stop", "testpmd>", 20)
+        self.vm_dut.send_expect("port stop all", "testpmd>")
+        self.vm_dut.send_expect("port config all rxq 2", "testpmd>", 20)
+        self.vm_dut.send_expect("port config all txq 2", "testpmd>")
+        self.vm_dut.send_expect("port start all", "testpmd>", 20)
+        self.vm_dut.send_expect("start", "testpmd>")
+
+        self.vhost_user.send_expect("stop", "testpmd> ", 120)
+        self.vhost_user.send_expect("start", "testpmd> ", 120)
+        time.sleep(5)
+        self.send_verify(self.running_case, self.frame_sizes, "Performance after change virtio queue size")
+        self.vm_dut.kill_all()
+        self.vhost_user.send_expect("quit", "# ", 120)
+
+    def test_perf_dynamic_change_vhost_queue_size(self):
+        #
+        # Test the performance when dynamically changing the vhost queue number
+        #
+        self.queue_number = 2
+        self.testcmd = self.target + "/app/testpmd -c %s -n %d --socket-mem %s" + \
+                       " --vdev 'net_vhost0,iface=vhost-net,queues=2' -- -i --rxq=1 --txq=1 --nb-cores=1"
+        self.coremask = utils.create_mask(self.cores)
+        self.testcmd_start = self.testcmd % (self.coremask, self.memory_channel, self.socket_mem)
+
+        self.vhost_user = self.dut.new_session(suite="user")
+
+        self.vhost_user.send_expect("cd /root/dpdk", "#", 120)
+        self.vhost_user.send_expect(self.testcmd_start, "testpmd> ", 120)
+        self.vhost_user.send_expect("set fwd mac", "testpmd> ", 120)
+        self.vhost_user.send_expect("start", "testpmd> ", 120)
+
+        self.start_onevm()
+
+        self.vm_dut.send_expect(self.vm_testpmd_vector, "testpmd>", 20)
+        self.vm_dut.send_expect("set fwd mac", "testpmd>", 20)
+        self.vm_dut.send_expect("start", "testpmd>")
+
+        self.send_verify(self.running_case, self.frame_sizes, "Performance before change vhost queue size")
+
+        self.vhost_user.send_expect("stop", "testpmd>", 20)
+        self.vhost_user.send_expect("port stop all", "testpmd>")
+        self.vhost_user.send_expect("port config all rxq 2", "testpmd>", 20)
+        self.vhost_user.send_expect("port config all txq 2", "testpmd>")
+        self.vhost_user.send_expect("port start all", "testpmd>", 20)
+        self.vhost_user.send_expect("start", "testpmd>")
+
+        time.sleep(5)
+        self.send_verify(self.running_case, self.frame_sizes, "Performance after change vhost queue size")
+        self.vm_dut.kill_all()
+        self.vhost_user.send_expect("quit", "# ", 120)
+
+    def tear_down(self):
+        #
+        # Run after each test case.
+        # Stop the VM and kill testpmd to avoid blocking the following TCs.
+        #
+        self.vm.stop()
+        self.dut.send_expect("killall -s INT testpmd", "#")
+        time.sleep(2)
+
+    def tear_down_all(self):
+        """
+        Run after the whole test suite.
+        """
+        pass
-- 
2.7.4


