[dpdk-users] Problem With Multi Queue

Hamed Zaghaghi hamed.zaghaghi at gmail.com
Wed Jan 27 12:04:33 CET 2016


Hi,

I'm implementing offline packet feature extraction using DPDK. The NIC is
described below:

0000:0b:00.0 '82599EB 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
unused=ixgbe

About 10 Gbit/s of traffic (roughly 2.5 million packets per second) is
received, and my application extracts features from each packet. Because of
the time-consuming nature of the feature extraction, this traffic cannot be
handled by one core.

So the problem arises when I distribute the traffic across 4 queues and
assign one core to each queue to handle it. However, the results show that
multi-queue does not help.

Case 1:
Number of cores and queues: 1
Traffic: 10-Gigabit (2.2M packets)
Dropped packets: almost 25% of traffic

Case 2:
Number of cores and queues: 4
Traffic: 10-Gigabit (2.2M packets)
Dropped packets: almost 25% of traffic

I'm using ETH_MQ_RX_RSS as the mq_mode of rxmode, and each core receives
and processes packets from its own queue, but the output of
rte_eth_xstats_get shows something strange.
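The port is configured roughly as follows (a simplified sketch; the
descriptor counts, the mbuf pool and the RSS hash fields are illustrative
and may differ slightly from my actual code):

#include <rte_ethdev.h>
#include <rte_mempool.h>

#define NB_RX_QUEUES 4
#define NB_RX_DESC   512               /* illustrative */
#define NB_TX_DESC   512               /* illustrative */

static struct rte_eth_conf port_conf = {
        .rxmode = {
                .mq_mode = ETH_MQ_RX_RSS, /* spread flows across RX queues */
        },
        .rx_adv_conf = {
                .rss_conf = {
                        .rss_key = NULL,  /* use the driver's default key */
                        .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
                },
        },
};

static int
init_port(uint8_t port_id, struct rte_mempool *mbuf_pool)
{
        uint16_t q;
        int ret;

        ret = rte_eth_dev_configure(port_id, NB_RX_QUEUES, 1, &port_conf);
        if (ret < 0)
                return ret;

        for (q = 0; q < NB_RX_QUEUES; q++) {
                ret = rte_eth_rx_queue_setup(port_id, q, NB_RX_DESC,
                                rte_eth_dev_socket_id(port_id), NULL,
                                mbuf_pool);
                if (ret < 0)
                        return ret;
        }

        /* one TX queue so the configuration is complete even if unused */
        ret = rte_eth_tx_queue_setup(port_id, 0, NB_TX_DESC,
                        rte_eth_dev_socket_id(port_id), NULL);
        if (ret < 0)
                return ret;

        rte_eth_promiscuous_enable(port_id);
        return rte_eth_dev_start(port_id);
}

With this setup, the xstats counters after the test run look like this: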

Total packets: 10,000,000
 - rx_good_packets, 7,831,965
 - rx_good_bytes, 3,999,612,189
 - rx_errors, 2,168,035
 - rx_mbuf_allocation_errors, 0
 - rx_q0_packets, 7,831,965
 - rx_q0_bytes, 3,999,612,189
 - rx_q0_errors, 0
 - rx_q1_packets, 0
 - rx_q1_bytes, 0
 - rx_q1_errors, 0
 - rx_q2_packets, 0
 - rx_q2_bytes, 0
 - rx_q2_errors, 0
 - rx_q3_packets, 0
 - rx_q3_bytes, 0
 - rx_q3_errors, 0
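
For reference, the counters above are read with something like the
following (assuming the DPDK 2.2 xstats API, where rte_eth_xstats_get fills
an array of struct rte_eth_xstats; the array size is illustrative):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

#define MAX_XSTATS 256                 /* illustrative upper bound */

static void
print_xstats(uint8_t port_id)
{
        struct rte_eth_xstats xstats[MAX_XSTATS];
        int i, n;

        n = rte_eth_xstats_get(port_id, xstats, MAX_XSTATS);
        if (n < 0 || n > MAX_XSTATS)   /* error, or array too small */
                return;
        for (i = 0; i < n; i++)
                printf(" - %s, %" PRIu64 "\n",
                       xstats[i].name, xstats[i].value);
}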

Is this behaviour normal? Am I configuring the ports incorrectly?

Thanks in advance for your attention
Best regards,
-- Hamed Zaghaghi
