[dts] [PATCH V1] test_plans: fix build warnings

Tu, Lijuan lijuan.tu at intel.com
Tue Aug 6 11:00:12 CEST 2019


Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces at dpdk.org] On Behalf Of Wenjie Li
> Sent: Monday, July 22, 2019 3:09 PM
> To: dts at dpdk.org
> Cc: Li, WenjieX A <wenjiex.a.li at intel.com>
> Subject: [dts] [PATCH V1] test_plans: fix build warnings
> 
> fix build warnings
> 
> Signed-off-by: Wenjie Li <wenjiex.a.li at intel.com>
> ---
>  test_plans/index.rst                          | 14 +++-
>  ...back_virtio_user_server_mode_test_plan.rst | 84 +++++++++----------
>  test_plans/nic_single_core_perf_test_plan.rst | 18 ++--
>  .../pvp_vhost_user_reconnect_test_plan.rst    |  1 +
>  test_plans/pvp_virtio_bonding_test_plan.rst   |  4 +-
>  5 files changed, 69 insertions(+), 52 deletions(-)
> 
> diff --git a/test_plans/index.rst b/test_plans/index.rst
> index 52d4e55..d0ebeb5 100644
> --- a/test_plans/index.rst
> +++ b/test_plans/index.rst
> @@ -81,7 +81,6 @@ The following are the test plans for the DPDK DTS automated test system.
>      l3fwdacl_test_plan
>      link_flowctrl_test_plan
>      link_status_interrupt_test_plan
> -    loopback_multi_paths_port_restart_performance_test_plan
>      loopback_multi_paths_port_restart_test_plan
>      loopback_virtio_user_server_mode_test_plan
>      mac_filter_test_plan
> @@ -174,16 +173,22 @@ The following are the test plans for the DPDK DTS automated test system.
>      vhost_dequeue_zero_copy_test_plan
>      vxlan_gpe_support_in_i40e_test_plan
>      pvp_diff_qemu_version_test_plan
> -    pvp_qemu_zero_copy_test_plan
>      pvp_share_lib_test_plan
>      pvp_vhost_user_built_in_net_driver_test_plan
>      pvp_virtio_user_2M_hugepages_test_plan
>      pvp_virtio_user_multi_queues_test_plan
> -    vhost_gro_test_plan
>      virtio_unit_cryptodev_func_test_plan
>      virtio_user_for_container_networking_test_plan
>      eventdev_perf_test_plan
>      eventdev_pipeline_perf_test_plan
> +    pvp_qemu_multi_paths_port_restart_test_plan
> +    pvp_vhost_user_reconnect_test_plan
> +    pvp_virtio_bonding_test_plan
> +    pvp_virtio_user_4k_pages_test_plan
> +    vdev_primary_secondary_test_plan
> +    vhost_1024_ethports_test_plan
> +    virtio_pvp_regression_test_plan
> +    virtio_user_as_exceptional_path
> 
>      unit_tests_cmdline_test_plan
>      unit_tests_crc_test_plan
> @@ -217,3 +222,6 @@ The following are the test plans for the DPDK DTS automated test system.
>      efd_test_plan
>      example_build_test_plan
>      flow_classify_test_plan
> +    dpdk_hugetlbfs_mount_size_test_plan
> +    nic_single_core_perf_test_plan
> +    power_managerment_throughput_test_plan
> \ No newline at end of file
> diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
> index 45388f4..1dd17d1 100644
> --- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
> +++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
> @@ -143,15 +143,15 @@ Test Case 3: loopback reconnect test with virtio 1.1 mergeable path and server m
> 
>  10. Port restart at vhost side by below command and re-calculate the average throughput::
> 
> -    testpmd>stop
> -    testpmd>port stop 0
> -    testpmd>port start 0
> -    testpmd>start tx_first 32
> -    testpmd>show port stats all
> +      testpmd>stop
> +      testpmd>port stop 0
> +      testpmd>port start 0
> +      testpmd>start tx_first 32
> +      testpmd>show port stats all
> 
>  11. Check each RX/TX queue has packets::
> 
> -    testpmd>stop
> +      testpmd>stop
> 
>  Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
> ================================================================================
> @@ -215,15 +215,15 @@ Test Case 4: loopback reconnect test with virtio 1.1 normal path and server mode
> 
>  10. Port restart at vhost side by below command and re-calculate the average throughput::
> 
> -    testpmd>stop
> -    testpmd>port stop 0
> -    testpmd>port start 0
> -    testpmd>start tx_first 32
> -    testpmd>show port stats all
> +      testpmd>stop
> +      testpmd>port stop 0
> +      testpmd>port start 0
> +      testpmd>start tx_first 32
> +      testpmd>show port stats all
> 
>  11. Check each RX/TX queue has packets::
> 
> -    testpmd>stop
> +      testpmd>stop
> 
>  Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server mode
> ===================================================================================
> @@ -287,15 +287,15 @@ Test Case 5: loopback reconnect test with virtio 1.0 mergeable path and server m
> 
>  10. Port restart at vhost side by below command and re-calculate the average throughput::
> 
> -    testpmd>stop
> -    testpmd>port stop 0
> -    testpmd>port start 0
> -    testpmd>start tx_first 32
> -    testpmd>show port stats all
> +      testpmd>stop
> +      testpmd>port stop 0
> +      testpmd>port start 0
> +      testpmd>start tx_first 32
> +      testpmd>show port stats all
> 
>  11. Check each RX/TX queue has packets::
> 
> -    testpmd>stop
> +      testpmd>stop
> 
>  Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and server mode
> ===========================================================================================
> @@ -359,15 +359,15 @@ Test Case 6: loopback reconnect test with virtio 1.0 inorder mergeable path and
> 
>  10. Port restart at vhost side by below command and re-calculate the average throughput::
> 
> -    testpmd>stop
> -    testpmd>port stop 0
> -    testpmd>port start 0
> -    testpmd>start tx_first 32
> -    testpmd>show port stats all
> +      testpmd>stop
> +      testpmd>port stop 0
> +      testpmd>port start 0
> +      testpmd>start tx_first 32
> +      testpmd>show port stats all
> 
>  11. Check each RX/TX queue has packets::
> 
> -    testpmd>stop
> +      testpmd>stop
> 
>  Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path and server mode
> ==============================================================================================
> @@ -431,15 +431,15 @@ Test Case 7: loopback reconnect test with virtio 1.0 inorder no-mergeable path a
> 
>  10. Port restart at vhost side by below command and re-calculate the average throughput::
> 
> -    testpmd>stop
> -    testpmd>port stop 0
> -    testpmd>port start 0
> -    testpmd>start tx_first 32
> -    testpmd>show port stats all
> +      testpmd>stop
> +      testpmd>port stop 0
> +      testpmd>port start 0
> +      testpmd>start tx_first 32
> +      testpmd>show port stats all
> 
>  11. Check each RX/TX queue has packets::
> 
> -    testpmd>stop
> +      testpmd>stop
> 
>  Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
> ================================================================================
> @@ -503,15 +503,15 @@ Test Case 8: loopback reconnect test with virtio 1.0 normal path and server mode
> 
>  10. Port restart at vhost side by below command and re-calculate the average throughput::
> 
> -    testpmd>stop
> -    testpmd>port stop 0
> -    testpmd>port start 0
> -    testpmd>start tx_first 32
> -    testpmd>show port stats all
> +      testpmd>stop
> +      testpmd>port stop 0
> +      testpmd>port start 0
> +      testpmd>start tx_first 32
> +      testpmd>show port stats all
> 
>  11. Check each RX/TX queue has packets::
> 
> -    testpmd>stop
> +      testpmd>stop
> 
>  Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server mode
> ===================================================================================
> @@ -575,12 +575,12 @@ Test Case 9: loopback reconnect test with virtio 1.0 vector_rx path and server m
> 
>  10. Port restart at vhost side by below command and re-calculate the average throughput::
> 
> -    testpmd>stop
> -    testpmd>port stop 0
> -    testpmd>port start 0
> -    testpmd>start tx_first 32
> -    testpmd>show port stats all
> +      testpmd>stop
> +      testpmd>port stop 0
> +      testpmd>port start 0
> +      testpmd>start tx_first 32
> +      testpmd>show port stats all
> 
>  11. Check each RX/TX queue has packets::
> 
> -    testpmd>stop
> \ No newline at end of file
> +      testpmd>stop
> \ No newline at end of file
> diff --git a/test_plans/nic_single_core_perf_test_plan.rst b/test_plans/nic_single_core_perf_test_plan.rst
> index 428d5db..4157c31 100644
> --- a/test_plans/nic_single_core_perf_test_plan.rst
> +++ b/test_plans/nic_single_core_perf_test_plan.rst
> @@ -38,12 +38,14 @@ Prerequisites
>  =============
> 
>  1. Hardware:
> -    1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
> +
> +    1.1) nic_single_core_perf test for FVL25G: two dual port FVL25G nics,
>          all installed on the same socket, pick one port per nic
> -    3) nic_single_core_perf test for NNT10G : four 82599 nics,
> +    1.2) nic_single_core_perf test for NNT10G: four 82599 nics,
>          all installed on the same socket, pick one port per nic
> 
> -2. Software:
> +2. Software::
> +
>      dpdk: git clone http://dpdk.org/git/dpdk
>      scapy: http://www.secdev.org/projects/scapy/
>      dts (next branch): git clone http://dpdk.org/git/tools/dts,
> @@ -51,12 +53,13 @@ Prerequisites
>      Trex code: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
>                 (to be run in stateless Layer 2 mode, see section in
>                  Getting Started Guide for more details)
> -    python-prettytable:
> +    python-prettytable:
>          apt install python-prettytable (for ubuntu os)
>          or dnf install python-prettytable (for fedora os).
> 
>  3. Connect all the selected nic ports to traffic generator(IXIA,TREX,
> -   PKTGEN) ports(TG ports).
> +   PKTGEN) ports(TG ports)::
> +
>      2 TG 25g ports for FVL25G ports
>      4 TG 10g ports for 4 NNT10G ports
> 
> @@ -86,19 +89,24 @@ Test Case : Single Core Performance Measurement
>  6) Result tables for different NICs:
> 
>     FVL25G:
> +
>     +------------+---------+-------------+---------+---------------------+
>     | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
>     +------------+---------+-------------+---------+---------------------+
>     |     64     |   512   | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
> +   +------------+---------+-------------+---------+---------------------+
>     |     64     |   2048  | xxxxxx Mpps |   xxx % |     xxx    Mpps     |
>     +------------+---------+-------------+---------+---------------------+
> 
>     NNT10G:
> +
>     +------------+---------+-------------+---------+---------------------+
>     | Frame Size | TXD/RXD |  Throughput |   Rate  | Expected Throughput |
>     +------------+---------+-------------+---------+---------------------+
>     |     64     |   128   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
> +   +------------+---------+-------------+---------+---------------------+
>     |     64     |   512   | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
> +   +------------+---------+-------------+---------+---------------------+
>     |     64     |   2048  | xxxxxx Mpps |   xxx % |       xxx  Mpps     |
>     +------------+---------+-------------+---------+---------------------+
> 
> diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
> index a2ccdb1..9cc1ddc 100644
> --- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
> +++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
> @@ -49,6 +49,7 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh
>   When DPDK vhost-user restarts from an normal or abnormal exit (such as a crash), the client mode allows DPDK to establish the connection again. Note
>    that QEMU version v2.7 or above is required for this reconnect feature.
>   Also, when DPDK vhost-user acts as the client, it will keep trying to reconnect to the server (QEMU) until it succeeds. This is useful in two cases:
> +
>      * When QEMU is not started yet.
>      * When QEMU restarts (for example due to a guest OS reboot).
> 
> diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
> index a90e7d3..c45b3f7 100644
> --- a/test_plans/pvp_virtio_bonding_test_plan.rst
> +++ b/test_plans/pvp_virtio_bonding_test_plan.rst
> @@ -50,7 +50,7 @@ Test case 1: vhost-user/virtio-pmd pvp bonding test with mode 0
> ===============================================================
>  Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
> 
> -1.  Bind one port to igb_uio,launch vhost by below command::
> +1. Bind one port to igb_uio,launch vhost by below command::
> 
>      ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -112,7 +112,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
>  Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 to 6
> ===================================================================================
> 
> -1.  Bind one port to igb_uio,launch vhost by below command::
> +1. Bind one port to igb_uio,launch vhost by below command::
> 
>      ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> --
> 2.17.2
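
For archive readers: the Sphinx warnings this patch silences come from two recurring reStructuredText mistakes — literal blocks under a numbered step indented level with (instead of beyond) the step text, and grid tables missing the ``+---+`` separator line between rows. A minimal illustrative sketch of the corrected forms (not part of the patch itself):

```rst
10. Port restart at vhost side by below command::

      testpmd>stop
      testpmd>port stop 0

    A grid table is well-formed only when every row is framed
    by a separator line:

    +------------+---------+
    | Frame Size | TXD/RXD |
    +------------+---------+
    |     64     |   512   |
    +------------+---------+
```

Under a ``10.`` list item the marker is four characters wide, so the literal block must be indented past column 4 (six spaces here) — which is exactly why the patch re-indents the ``testpmd>`` blocks from four spaces to six.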


