[dts] [PATCH V1 1/2] pmd_stacked_bonded: upload test plan

yufengx.mo at intel.com yufengx.mo at intel.com
Wed Jun 6 07:37:42 CEST 2018


From: yufengmx <yufengx.mo at intel.com>


This test plan is for the pmd stacked bonded feature.

Stacked bonding allows two or more bonded devices to be bonded into one master
bonded device.

Signed-off-by: yufengmx <yufengx.mo at intel.com>
---
 test_plans/pmd_stacked_bonded_test_plan.rst | 340 ++++++++++++++++++++++++++++
 1 file changed, 340 insertions(+)
 create mode 100644 test_plans/pmd_stacked_bonded_test_plan.rst

diff --git a/test_plans/pmd_stacked_bonded_test_plan.rst b/test_plans/pmd_stacked_bonded_test_plan.rst
new file mode 100644
index 0000000..554af1a
--- /dev/null
+++ b/test_plans/pmd_stacked_bonded_test_plan.rst
@@ -0,0 +1,340 @@
+.. Copyright (c) <2017>, Intel Corporation
+   All rights reserved.
+
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Stacked Bonded
+==============
+
+The demand arises from a discussion with a prospective customer for a 100G NIC
+based on RRC. The customer already uses Mellanox 100G NICs. Mellanox 100G NICs
+support a proper x16 PCIe interface, so the host sees a single netdev and that
+netdev corresponds directly to the 100G Ethernet port. They indicated that in
+their current system they bond multiple 100G NICs together, using the DPDK
+bonding API in their application. They are interested in an alternative source
+for the 100G NIC and are in conversation with Silicom, who are shipping a 100G
+RRC-based NIC (something like Boulder Rapids). The issue they have with the RRC
+NIC is that it presents as two PCIe interfaces (netdevs) instead of one. If DPDK
+bonding can operate at a first level on the two RRC netdevs to present a single
+netdev, the application can then bond multiple of these bonded interfaces to
+implement NIC bonding.
+
+Prerequisites for Bonding
+=========================
+
+*. hardware configuration
+  all link ports of the tester/DUT should run at the same data rate and
+  support full-duplex. Slave-down testing needs at least four ports; the
+  other test cases can run on two ports.
+
+  testing hardware configuration
+  ==============================
+  NIC/DUT/TESTER port requirements:
+  - DUT: 4 NIC ports.
+  - TESTER: 4 NIC ports.
+
+  Port connections between TESTER and DUT
+       TESTER                                DUT
+               physical link             logical link
+     .--------.            .------------------------------------------.
+     | portA0 | <--------> | portB0 <---> .--------.                  |
+     |        |            |              | bond 0 | <-----> .------. |
+     | portA1 | <--------> | portB1 <---> '--------'         |      | |
+     |        |            |                                 |bond 2| |
+     | portA2 | <--------> | portB2 <---> .--------.         |      | |
+     |        |            |              | bond 1 | <-----> '------' |
+     | portA3 | <--------> | portB3 <---> '--------'                  |
+     '--------'            '------------------------------------------'
+
+Test Case : basic behavior
+==========================
+Allow a bonded device to be added to another bonded device. This is supported
+by the following modes:
+
+ - balance-rr    0
+ - active-backup 1
+ - balance-xor   2
+ - broadcast     3
+ - balance-tlb   5
+
+There are two limitations on creating the master bonding:
+
+ - the total depth of nesting is limited to two levels
+ - 802.3ad mode is not supported if one or more of the slaves is a bonded
+   device
+
+*. add the same device twice to check that the error case is handled
+*. add the second-level bonded device to another new bonded device and check
+   that an error message occurs
+*. stacked bonding is forbidden in mode 4
+*. the queue configuration of the master bonding and of each slave is the same
+
+steps:
+*. bind two ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add one slave, check bond 2 config status
+
+    testpmd> port stop all
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 0 2
+    testpmd> show bonding config 2
+
+*. create second bonded device and add one slave, check bond 3 config status
+
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 1 3
+    testpmd> show bonding config 3
+
+*. create third bonded device and add the first and second bonded devices as
+   its slaves. check that the slaves are added successfully. stacked bonding
+   is forbidden in mode 4, so mode 4 fails to add a bonded device as a slave
+
+    testpmd> create bonded device <mode> 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : active-backup stacked bonded rx traffic
+===================================================
+set up stacked bonding in dut/testpmd, send TCP packets with scapy and check
+the packet statistics.
+
+steps:
+*. bind two ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add one port as a slave
+
+    testpmd> port stop all
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 2
+
+*. create second bonded device and add one port as a slave
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 1 3
+
+*. create third bonded device and add the first and second bonded devices as
+   its slaves. check that the slaves are added successfully.
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 1
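+
+   as a reference, the traffic for this step could be generated with a minimal
+   scapy sketch like the one below; the MAC addresses and interface names are
+   placeholders to be filled in for the actual setup
+
+    from scapy.all import Ether, IP, TCP, Raw, sendp
+
+    # one padded TCP packet per DUT port, addressed to that port's MAC
+    pkt0 = Ether(dst="<portB0 mac>")/IP()/TCP()/Raw(load="\x00" * 60)
+    pkt1 = Ether(dst="<portB1 mac>")/IP()/TCP()/Raw(load="\x00" * 60)
+    # 100 packets to each DUT port through the tester port wired to it
+    sendp(pkt0, iface="<portA0 iface>", count=100)
+    sendp(pkt1, iface="<portA1 iface>", count=100)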
+
+*. check that the first and second bonded devices each receive 100 packets and
+   that the third bonded device receives 200 packets
+
+    testpmd> show port stats all
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : active-backup stacked bonded rx traffic with slave down
+===================================================================
+set up stacked bonding in dut/testpmd and set one slave of each first-level
+bonded device to down status. send TCP packets with scapy and check the packet
+statistics.
+
+steps:
+*. bind four ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2> \
+                                               <pci address 3> <pci address 4>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add two ports as slaves, then set port 1 down
+
+    testpmd> port stop all
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> port stop 1
+
+*. create second bonded device and add two ports as slaves, then set port 3 down
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+    testpmd> port stop 3
+
+*. create third bonded device and add the first and second bonded devices as
+   its slaves. check that the slaves are added successfully.
+
+    testpmd> create bonded device 1 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 2
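+
+   the same scapy sketch as in the previous rx case applies here, except that,
+   with DUT ports 1 and 3 down, packets target only the slaves that are still
+   up (placeholder names again, same imports as the previous sketch)
+
+    # DUT ports 1 and 3 are down, so send only to ports 0 and 2
+    pkt0 = Ether(dst="<portB0 mac>")/IP()/TCP()/Raw(load="\x00" * 60)
+    pkt2 = Ether(dst="<portB2 mac>")/IP()/TCP()/Raw(load="\x00" * 60)
+    sendp(pkt0, iface="<portA0 iface>", count=100)
+    sendp(pkt2, iface="<portA2 iface>", count=100)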
+
+*. check that the first and second bonded devices each receive 100 packets and
+   that the third bonded device receives 200 packets
+
+    testpmd> show port stats all
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : balance-xor stacked bonded rx traffic
+=================================================
+set up stacked bonding in dut/testpmd, send TCP packets with scapy and check
+the packet statistics.
+
+steps:
+*. bind two ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2>
+
+*. boot up testpmd and stop all ports
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+    testpmd> port stop all
+
+*. create first bonded device and add one port as a slave
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 2
+
+*. create second bonded device and add one port as a slave
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 1 3
+
+*. create third bonded device and add the first and second bonded devices as
+   its slaves. check that the slaves are added successfully.
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 4
+    testpmd> add bonding slave 3 4
+    testpmd> show bonding config 4
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 1
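+
+   the traffic can be generated as in the active-backup rx case; if distinct
+   flows are wanted (the xor policy balances tx traffic by a packet-field
+   hash), a variant sketch could vary the TCP source port (placeholder names
+   again)
+
+    from scapy.all import Ether, IP, TCP, sendp
+
+    # 100 packets per DUT port, each with a distinct TCP source port so the
+    # flows differ
+    for mac, iface in (("<portB0 mac>", "<portA0 iface>"),
+                       ("<portB1 mac>", "<portA1 iface>")):
+        pkts = [Ether(dst=mac)/IP()/TCP(sport=1024 + i) for i in range(100)]
+        sendp(pkts, iface=iface)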
+
+*. check that the first and second bonded devices each receive 100 packets and
+   that the third bonded device receives 200 packets
+
+    testpmd> show port stats all
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
+
+Test Case : balance-xor stacked bonded rx traffic with slave down
+=================================================================
+set up stacked bonding in dut/testpmd and set one slave of each first-level
+bonded device to down status. send TCP packets with scapy and check the packet
+statistics.
+
+steps:
+*. bind four ports
+
+    ./usertools/dpdk-devbind.py --bind=igb_uio <pci address 1> <pci address 2> \
+                                               <pci address 3> <pci address 4>
+
+*. boot up testpmd
+
+    ./testpmd -c 0x6 -n 4  -- -i --tx-offloads=0xXXXX
+
+*. create first bonded device and add two ports as slaves, then set port 1 down
+
+    testpmd> port stop all
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 0 4
+    testpmd> add bonding slave 1 4
+    testpmd> port stop 1
+
+*. create second bonded device and add two ports as slaves, then set port 3 down
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 2 5
+    testpmd> add bonding slave 3 5
+    testpmd> port stop 3
+
+*. create third bonded device and add the first and second bonded devices as
+   its slaves. check that the slaves are added successfully.
+
+    testpmd> create bonded device 2 0
+    testpmd> add bonding slave 4 6
+    testpmd> add bonding slave 5 6
+    testpmd> show bonding config 6
+
+*. start all bonded device ports
+
+    testpmd> port start all
+    testpmd> start
+
+*. send 100 packets to port 0 and port 2
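+
+   as in the active-backup slave-down case, with DUT ports 1 and 3 down the
+   packets target only the active slaves (placeholder names, reusing the
+   previous sketch's imports)
+
+    # DUT ports 1 and 3 are down, so send only to ports 0 and 2
+    for mac, iface in (("<portB0 mac>", "<portA0 iface>"),
+                       ("<portB2 mac>", "<portA2 iface>")):
+        pkts = [Ether(dst=mac)/IP()/TCP(sport=1024 + i) for i in range(100)]
+        sendp(pkts, iface=iface)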
+
+*. check that the first and second bonded devices each receive 100 packets and
+   that the third bonded device receives 200 packets
+
+    testpmd> show port stats all
+
+*. close testpmd
+
+    testpmd> stop
+    testpmd> quit
-- 
1.9.3


