Bug 4 - Segfault while running txonly mode with 4 16B SGEs packets
Summary: Segfault while running txonly mode with 4 16B SGEs packets
Status: RESOLVED WORKSFORME
Alias: None
Product: DPDK
Classification: Unclassified
Component: testpmd
Version: 17.08
Hardware: x86 Linux
Importance: Normal major
Target Milestone: ---
Assignee: Saleh AlSouqi
URL:
Depends on:
Blocks:
 
Reported: 2017-11-07 09:42 CET by Saleh AlSouqi
Modified: 2021-02-20 07:51 CET
CC List: 3 users



Description Saleh AlSouqi 2017-11-07 09:42:11 CET
Testpmd crashes whenever any segment size given to --txpkts is <= 18 bytes.

testpmd command:
./x86_64-native-linuxapp-gcc/build/app/test-pmd/testpmd -c 0x003f -n 4 --no-pci --vdev net_ring0 --vdev net_ring1 -- --burst=64 --mbcache=512 --portmask 0x3 -i --txd=1024 --rxd=256 --rxq=1 --txq=1 --coremask 0x001e --disable-crc-strip --forward-mode=txonly --txpkts=16,16,16,16 -a

Trace:
#0 0x0000000000475698 in pkt_burst_transmit ()
#1 0x000000000045713a in run_pkt_fwd_on_lcore ()
#2 0x0000000000457244 in start_pkt_forward_on_core ()
#3 0x00000000004b9994 in eal_thread_loop ()
#4 0x00007ffff7093e25 in start_thread () from /lib64/libpthread.so.0
#5 0x00007ffff6dc134d in clone () from /lib64/libc.so.6
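For context on why tiny segments could crash pkt_burst_transmit: testpmd's txonly engine writes a 42-byte Ethernet + IPv4 + UDP header into the packet it builds, so a 16-byte first segment cannot hold the whole header, and an unchecked copy would write past the segment buffer. The sketch below is illustrative only, not the actual testpmd code; `struct seg` and `copy_buf_to_segs` are hypothetical names standing in for an mbuf chain and a bounds-checked, segment-aware copy.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical simplification of a multi-segment packet: a chain of
 * fixed-size buffers, like the 16-byte mbuf segments requested with
 * --txpkts=16,16,16,16. */
#define SEG_SIZE 16

struct seg {
    unsigned char data[SEG_SIZE];
    struct seg *next;
};

/* Segment-aware copy: never writes past a segment boundary.  A plain
 * memcpy(first_seg->data, buf, len) with len == 42 (Eth+IPv4+UDP
 * header) into a 16-byte segment is the kind of overrun that could
 * explain the reported segfault. */
static int copy_buf_to_segs(const unsigned char *buf, size_t len,
                            struct seg *s)
{
    size_t off = 0;

    while (len > 0) {
        size_t chunk = len < SEG_SIZE ? len : SEG_SIZE;

        if (s == NULL)
            return -1;      /* chain too short for the data */
        memcpy(s->data, buf + off, chunk);
        off += chunk;
        len -= chunk;
        s = s->next;
    }
    return 0;
}
```

With four 16-byte segments the 42-byte header lands as 16 + 16 + 10 bytes across the first three segments; with too few segments the function fails cleanly instead of overrunning.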
Comment 1 Manjunath Govind 2018-01-03 10:45:34 CET
Do we need to specify the --rxq=1 argument to testpmd when the test is run in tx-only mode?
Comment 2 Ajit Khaparde 2018-08-02 01:14:07 CEST
Saleh,
Is this still an issue?

Thanks
Ajit
Comment 3 Zhang, RobinX 2021-01-27 09:30:33 CET
Hi Saleh, testpmd works well with your command on the latest codebase. Could you please try again on your side?

Here's my test environment and test result:
------------------------------------------
<DPDK Version>:
6a2cf58a04 (HEAD -> main, origin/main, origin/HEAD)

<Kernel Version>:
Linux intel-npg-odc-srv03 5.4.0-62-generic #70-Ubuntu SMP Tue Jan 12 12:45:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

<NIC Information>:
driver: i40e
version: 2.13.10
firmware-version: 8.10 0x800093e3 1.2829.0
expansion-rom-version:
bus-info: 0000:05:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
------------------------------------------

root@intel-npg-odc-srv03:~/code/dpdk# ./build/app/dpdk-testpmd -c 0x003f -n 4 --no-pci --vdev net_ring0 --vdev net_ring1 -- --burst=64 --mbcache=512 --portmask 0x3 -i --txd=1024 --rxd=256 --rxq=1 --txq=1 --coremask 0x001e --forward-mode=txonly --txpkts=16,16,16,16 -a
EAL: Detected 88 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
Interactive-mode selected
previous number of forwarding cores 1 - changed to number of configured cores 4
Set txonly packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=229376, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:00
Configuring Port 1 (socket 0)
Port 1: 00:00:00:00:00:00
Checking link statuses...
Done
Start automatic packet forwarding
txonly packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  txonly packet forwarding packets/burst=64
  packet len=64 - nb packet segments=4
  nb forwarding cores=4 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=256 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=1024 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 1023           TX-dropped: 55122561      TX-total: 55123584
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 1023           TX-dropped: 55305985      TX-total: 55307008
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 2046           TX-dropped: 110428546     TX-total: 110430592
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd>
Comment 4 Zhang, RobinX 2021-02-20 07:51:16 CET
Cannot be reproduced, set to RESOLVED WORKSFORME.
