[dpdk-dev] ixgbe vector mode not working.

Liang, Cunming cunming.liang at intel.com
Wed Feb 25 05:55:09 CET 2015


Hi Stephen,

I tried on the latest master branch with testpmd.
2 rxq and 2 txq as below, vector PMD on both rx and tx. I can't reproduce it.
I checked your log: on the tx side, it looks like the tx vector path hasn't been enabled (it shows vpmd on rx, spmd on tx).
Could you share the following parameters from your app?
	RX desc=128 - RX free threshold=32
	TX desc=512 - TX free threshold=32
	TX RS bit threshold=32 - TXQ flags=0xf01
As your case uses 2 rxq and 1 txq, could you explain the traffic flow between them?
Does one thread poll packets from each rxq and send them to the specified txq?

./x86_64-native-linuxapp-gcc/app/testpmd -c 0xff00 -n 4 -- -i --coremask=f000 --txfreet=32 --rxfreet=32 --txqflags=0xf01 --txrst=32 --rxq=2 --txq=2 --numa
 [...]
Configuring Port 0 (socket 1)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace9ac0 hw_ring=0x7f99c9c3f480 dma_addr=0x1fdd83f480
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace7980 hw_ring=0x7f99c9c4f480 dma_addr=0x1fdd84f480
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace7100 hw_ring=0x7f99c9c5f480 dma_addr=0x1fdd85f480
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace6880 hw_ring=0x7f99c9c6f500 dma_addr=0x1fdd86f500
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=1.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Port 0: 90:E2:BA:30:A0:75
Configuring Port 1 (socket 1)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace4540 hw_ring=0x7f99c9c7f580 dma_addr=0x1fdd87f580
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f99cace2400 hw_ring=0x7f99c9c8f580 dma_addr=0x1fdd88f580
PMD: set_tx_function(): Using simple tx code path
PMD: set_tx_function(): Vector tx enabled.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace1b80 hw_ring=0x7f99c9c9f580 dma_addr=0x1fdd89f580
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f99cace1300 hw_ring=0x7f99c9caf600 dma_addr=0x1fdd8af600
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=1.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
Port 1: 90:E2:BA:06:90:59
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> show config rxtx
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=4 - nb forwarding ports=2
  RX queues=2 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=2 - TX desc=512 - TX free threshold=32
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=32 - TXQ flags=0xf01

-Cunming

> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen at networkplumber.org]
> Sent: Wednesday, February 25, 2015 8:16 AM
> To: Nemeth, Balazs; Richardson, Bruce; Liang, Cunming; Neil Horman
> Cc: dev at dpdk.org
> Subject: ixgbe vector mode not working.
> 
> The ixgbe driver (from 1.8 or 2.0) works fine in normal (non-vectored) mode.
> But when vector mode is enabled, it gets a few packets through then hangs.
> We use 2 Rx queues and 1 Tx queue per interface.
> 
> Devices:
> 01:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+
> Network Connection (rev 01)
> 02:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-
> AT2 (rev 01)
> 
> Log:
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
> PMD: eth_ixgbe_dev_init(): MAC: 4, PHY: 3
> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x1528
> [    0.000043] DATAPLANE: Port 0 rte_ixgbe_pmd on socket 0
> [    0.000053] DATAPLANE: Port 1 rte_ixgbe_pmd on socket 0
> [    0.031638] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6a1b40
> hw_ring=0x7fc5ab548300 dma_addr=0x67348300
> [    0.031647] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0,
> queue=0.
> [    0.031653] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.031672] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac6999c0
> hw_ring=0x7fc5ab558380 dma_addr=0x67358380
> [    0.031680] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0,
> queue=1.
> [    0.031695] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.031708] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac697880
> hw_ring=0x7fc5ab568400 dma_addr=0x67368400
> [    0.035745] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac684e00
> hw_ring=0x7fc5ab580480 dma_addr=0x67380480
> [    0.035754] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1,
> queue=0.
> [    0.035761] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.035783] PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fc5ac67cc80
> hw_ring=0x7fc5ab590500 dma_addr=0x67390500
> [    0.035792] PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc
> Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1,
> queue=1.
> [    0.035798] PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please
> make sure RX burst size no less than 32.
> [    0.035810] PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fc5ac67ab40
> hw_ring=0x7fc5ab5a0580 dma_addr=0x673a0580
> [    5.886027] PMD: ixgbe_dev_link_status_print(): Port 0: Link Down
> [    5.886064] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000
> Mbps - full-duplex
> [    6.234150] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 0 Mbps
> - half-duplex
> [    6.234196] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
> [    6.886098] PMD: ixgbe_dev_link_status_print(): Port 0: Link Up - speed 10000
> Mbps - full-duplex
> [   10.234776] PMD: ixgbe_dev_link_status_print(): Port 1: Link Down
> [   11.818676] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000
> Mbps - full-duplex
> [   12.818758] PMD: ixgbe_dev_link_status_print(): Port 1: Link Up - speed 10000
> Mbps - full-duplex
> 
> Application trace shows lots of packets, then everything stops.
