Bug 1228 - [dpdk-21.11.4] pvp_qemu_multi_paths_port_restart:test_perf_pvp_qemu_normal_mac: performance drops by about 23.5% when sending small packets
Summary: [dpdk-21.11.4]pvp_qemu_multi_paths_port_restart:test_perf_pvp_qemu_normal_mac...
Status: UNCONFIRMED
Alias: None
Product: DPDK
Classification: Unclassified
Component: vhost/virtio
Version: 21.11
Hardware: All
OS: All
Priority: Normal
Severity: normal
Target Milestone: ---
Assignee: dev
URL:
Depends on:
Blocks:
 
Reported: 2023-05-11 10:11 CEST by lingwei
Modified: 2023-05-11 10:18 CEST

Description lingwei 2023-05-11 10:11:39 CEST
[Environment]
DPDK version: 21.11.4-rc1
Other software versions: QEMU-7.0.0.
OS: Ubuntu 22.04.1 LTS/Linux 5.15.45-051545-generic
Compiler: gcc version 11.3.0 (Ubuntu 11.3.0-1ubuntu1~22.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Intel Ethernet Controller XL710 for 40GbE QSFP+ 1583
NIC firmware: i40e-2.22.18/9.20 0x8000d893 1.3353.0

[Test Setup]
Steps to reproduce

1. Bind one NIC port to vfio-pci:

dpdk-devbind.py --force --bind=vfio-pci 0000:18:00.0
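
Optionally (not part of the original steps), the binding can be verified with:

dpdk-devbind.py --status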

2. Check the NUMA node of the NIC port:

root@dut220:~# cat /sys/bus/pci/devices/0000\:18\:00.0/numa_node
0

3. View the lcore layout of the server:

root@dut220:~# /root/dpdk/usertools/cpu_layout.py
======================================================================
Core and Socket Information (as reported by '/sys/devices/system/cpu')
======================================================================

cores =  [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 30]
sockets =  [0, 1]        
Socket 0          Socket 1
        --------          --------
Core 0  [0, 56]           [28, 84]
Core 1  [1, 57]           [29, 85]
Core 2  [2, 58]           [30, 86]
Core 3  [3, 59]           [31, 87]
Core 4  [4, 60]           [32, 88]
Core 5  [5, 61]           [33, 89]
Core 6  [6, 62]           [34, 90]
Core 8  [7, 63]           [35, 91]
Core 9  [8, 64]           [36, 92]
Core 10 [9, 65]           [37, 93]
Core 11 [10, 66]          [38, 94]
Core 12 [11, 67]          [39, 95]
Core 13 [12, 68]          [40, 96]
Core 14 [13, 69]          [41, 97]
Core 16 [14, 70]          [42, 98]
Core 17 [15, 71]          [43, 99]
Core 18 [16, 72]          [44, 100]
Core 19 [17, 73]          [45, 101]
Core 20 [18, 74]          [46, 102]
Core 21 [19, 75]          [47, 103]
Core 22 [20, 76]          [48, 104]
Core 24 [21, 77]          [49, 105]
Core 25 [22, 78]          [50, 106]
Core 26 [23, 79]          [51, 107]
Core 27 [24, 80]          [52, 108]
Core 28 [25, 81]          [53, 109]
Core 29 [26, 82]          [54, 110]
Core 30 [27, 83]          [55, 111]
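
As a cross-check (assuming a standard Linux sysfs layout; these commands are not in the original report), the CPU list of each NUMA node can also be read directly. In this layout, lcores 18-19 used for vhost in step 4 share socket 0 with the NIC, while lcores 30-37 used for the VM in step 5 are on socket 1:

cat /sys/devices/system/node/node0/cpulist
cat /sys/devices/system/node/node1/cpulist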


4. Start vhost-user testpmd with lcores on the same NUMA node as the NIC port (e.g., socket 0):

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 18,19 -n 4 -a 0000:18:00.0  --file-prefix=vhost_2352949_20230407162534  --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024

testpmd>set fwd mac
testpmd>start

5. Start VM0 with QEMU 7.0.0 using lcores on a different NUMA node from the NIC port (e.g., socket 1):

taskset -c 30,31,32,33,34,35,36,37 /home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64  -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait -netdev user,id=nttsip1,hostfwd=tcp:10.239.252.220:6000-:22 -device e1000,netdev=nttsip1  -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 -cpu host -smp 8 -m 16384 -object memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :4 -drive file=/home/image/ubuntu2004.img
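
Note that the QEMU command above backs guest memory with hugepages at /mnt/huge on the host; the host-side hugepage setup is not shown in the report. A typical (assumed) preparation would be:

echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # assumed: 8192 x 2MB = 16G, matching the guest memory size
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge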

6. SSH into VM0 and bind the virtio-net device to vfio-pci:

echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
dpdk-devbind.py --force --bind=vfio-pci 0000:00:04.0

7. Start testpmd in VM0:

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a 0000:00:04.0,vectorized=1 -- -i --nb-cores=1 --txd=1024 --rxd=1024

testpmd>set fwd mac
testpmd>start
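
Optionally (not part of the original steps), confirm that traffic is actually being forwarded before measuring, using testpmd's statistics command in either testpmd instance:

testpmd>show port stats all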

8. Use pktgen to send packets and record the throughput (a possible traffic-generator command sequence is sketched below).
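
The exact traffic-generator commands are not part of the report. Assuming Pktgen-DPDK on the tester (an assumption; any generator capable of 64B traffic works), a minimal sketch would be:

Pktgen:/> set 0 size 64
Pktgen:/> set 0 rate 100
Pktgen:/> start 0

Then read the RX/TX rates from the Pktgen statistics display.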

Actual Result

+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) |         Mode         | Throughput(Mpps) | % linerate |     Cycle      |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 4.314            | 7.247      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
Expected Result

+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) |         Mode         | Throughput(Mpps) | % linerate |     Cycle      |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 5.642            | 9.478      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
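
For reference, the figure in the title follows from the two tables: (5.642 - 4.314) / 5.642 ≈ 0.235, i.e. roughly a 23.5% throughput drop for 64B packets.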
Regression

Is this issue a regression: Y

Version the regression was introduced:

commit c41493361c87e730459ead9311c68528eb0874aa (HEAD)
Author: Boleslav Stankevich <boleslav.stankevich@oktetlabs.ru>
Date:   Fri Mar 3 14:19:29 2023 +0300

    net/virtio: deduce IP length for TSO checksum

    [ upstream commit d069c80a5d8c0a05033932421851cdb7159de0df ]

    The length of TSO payload could not fit into 16 bits provided by the
    IPv4 total length and IPv6 payload length fields. Thus, deduce it
    from the length of the packet.

    Fixes: 696573046e9e ("net/virtio: support TSO")

    Signed-off-by: Boleslav Stankevich <boleslav.stankevich@oktetlabs.ru>
    Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
    Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

