[dpdk-dev] Performance impact with QoS

satish nsatishbabu at gmail.com
Tue Nov 11 01:24:55 CET 2014


Hi,
I am looking for comments on the performance impact we are seeing with DPDK QoS.

We are developing an application based on DPDK.
Our application supports IPv4 forwarding with and without QoS.

Without QoS, we achieve almost full wire rate (bi-directional
traffic) with 128-, 256- and 512-byte packets.
But when we enable QoS, performance drops to roughly half for 128- and 256-byte
packets.
For 512-byte packets we do not observe any drop even with QoS enabled
(still achieving full wire rate).
The traffic used in both cases is the same (one stream, with the QoS
classification matching the first queue of traffic class 0).

In our application we use memory buffer pools (mempools) to receive the packet
bursts; no ring buffer is used.
The same mbufs are used during packet processing and TX (enqueue and dequeue),
and all of the above is handled on the same core.
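
To make the setup concrete, a minimal sketch of our per-core loop is below
(PORT_ID, QUEUE_ID and BURST_SIZE are placeholders, error handling omitted):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define PORT_ID    0
#define QUEUE_ID   0
#define BURST_SIZE 32

static void
fwd_loop(void)
{
        struct rte_mbuf *pkts[BURST_SIZE];

        for (;;) {
                /* mbufs are allocated from the mempool attached to the RX queue */
                uint16_t nb_rx = rte_eth_rx_burst(PORT_ID, QUEUE_ID,
                                                  pkts, BURST_SIZE);
                if (nb_rx == 0)
                        continue;

                /* ... IPv4 lookup / header rewrite on the same mbufs ... */

                uint16_t nb_tx = rte_eth_tx_burst(PORT_ID, QUEUE_ID,
                                                  pkts, nb_rx);
                /* free whatever the TX queue did not accept */
                while (nb_tx < nb_rx)
                        rte_pktmbuf_free(pkts[nb_tx++]);
        }
}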

For normal forwarding (without QoS), we use rte_eth_tx_burst() for TX.

For forwarding with QoS, we call rte_sched_port_pkt_write(),
rte_sched_port_enqueue() and rte_sched_port_dequeue()
before rte_eth_tx_burst().
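
For reference, a rough sketch of that QoS TX path (qos_port is our configured
rte_sched_port; the subport/pipe/tc/queue values are placeholders, and the
rte_sched_port_pkt_write() prototype shown is the older one without the port
argument):

#include <rte_sched.h>

static void
qos_tx(struct rte_sched_port *qos_port, struct rte_mbuf **pkts, uint16_t nb_rx)
{
        /* PORT_ID, QUEUE_ID, BURST_SIZE: same placeholders as above */
        struct rte_mbuf *tx_pkts[BURST_SIZE];
        uint16_t i;

        /* tag each mbuf with its subport/pipe/tc/queue before enqueue */
        for (i = 0; i < nb_rx; i++)
                rte_sched_port_pkt_write(pkts[i],
                                         0 /* subport */, 0 /* pipe */,
                                         0 /* traffic class */, 0 /* queue */,
                                         e_RTE_METER_GREEN);

        /* hand the burst to the scheduler ... */
        rte_sched_port_enqueue(qos_port, pkts, nb_rx);

        /* ... and pull out whatever it is willing to transmit now */
        int nb_deq = rte_sched_port_dequeue(qos_port, tx_pkts, BURST_SIZE);

        uint16_t nb_tx = rte_eth_tx_burst(PORT_ID, QUEUE_ID,
                                          tx_pkts, (uint16_t)nb_deq);
        while (nb_tx < nb_deq)
                rte_pktmbuf_free(tx_pkts[nb_tx++]);
}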

We understand that part of the performance dip for 128- and 256-byte packets is
because many more packets have to be processed per second than with 512-byte
packets.
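(As a rough example, assuming a 10 GbE port at wire rate: 128-byte packets
arrive at about 10e9 / ((128 + 20) * 8) ≈ 8.4 Mpps per direction, while
512-byte packets arrive at only about 2.3 Mpps, so the per-packet CPU budget
is roughly 3.6x smaller in the 128-byte case.)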

Can someone comment on the performance dip in my case with QoS enabled?
[1] Can this be because of inefficient use of the RTE sched calls for QoS?
[2] Is it poor buffer management?
[3] Any other comments?

To achieve good performance in the QoS case, is it necessary to use a worker
thread (running on a different core) connected through a ring buffer?
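
To be specific, the design I have in mind looks roughly like the sketch below:
an I/O lcore pushes RX bursts into an rte_ring and a worker lcore runs the
scheduler and TX. 'ring' would be created elsewhere with rte_ring_create(),
and the rte_ring_*_burst() prototypes shown are the older ones without the
extra output argument.

#include <rte_ring.h>

/* I/O core: receive and hand off to the worker through the ring */
static void
io_core(struct rte_ring *ring)
{
        struct rte_mbuf *pkts[BURST_SIZE];

        for (;;) {
                uint16_t nb_rx = rte_eth_rx_burst(PORT_ID, QUEUE_ID,
                                                  pkts, BURST_SIZE);
                if (nb_rx == 0)
                        continue;
                unsigned sent = rte_ring_enqueue_burst(ring, (void **)pkts, nb_rx);
                while (sent < nb_rx)    /* drop what the ring could not take */
                        rte_pktmbuf_free(pkts[sent++]);
        }
}

/* worker core: QoS enqueue/dequeue and TX, on a different lcore */
static void
worker_core(struct rte_ring *ring, struct rte_sched_port *qos_port)
{
        struct rte_mbuf *pkts[BURST_SIZE];
        struct rte_mbuf *tx_pkts[BURST_SIZE];

        for (;;) {
                unsigned n = rte_ring_dequeue_burst(ring, (void **)pkts,
                                                    BURST_SIZE);
                if (n > 0)
                        rte_sched_port_enqueue(qos_port, pkts, n);

                int nb_deq = rte_sched_port_dequeue(qos_port, tx_pkts,
                                                    BURST_SIZE);
                uint16_t nb_tx = rte_eth_tx_burst(PORT_ID, QUEUE_ID,
                                                  tx_pkts, (uint16_t)nb_deq);
                while (nb_tx < nb_deq)
                        rte_pktmbuf_free(tx_pkts[nb_tx++]);
        }
}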

Please provide your comments.

Thanks in advance.

Regards,
Satish Babu
