[dpdk-users] Performance Problem of DPDK pkt-gen

Royce Niu royceniu at gmail.com
Mon Mar 7 18:14:10 CET 2016


Thanks, that helps a lot!

The problem now is how to disable pause frames.

I searched and tried

sudo ethtool -A eth4 autoneg off tx off rx off
sudo ethtool -A eth5 autoneg off tx off rx off
sudo ethtool -K eth4 tso off gro off gso off tx off rx off
sudo ethtool -K eth5 tso off gro off gso off tx off rx off

But it is not working. I suspect ethtool has no effect here: once the NICs are bound to igb_uio, the kernel driver (and therefore ethtool) no longer controls them, so flow control has to be configured through DPDK itself.
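
If that is the case, the place to do it is inside the DPDK application after port setup. A minimal sketch, assuming the DPDK 2.2 ethdev API (rte_eth_dev_flow_ctrl_get/set); the function name is mine:

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: turn off 802.3x pause frames on a DPDK-managed port.
 * Call after rte_eth_dev_configure()/rte_eth_dev_start(). */
static int
disable_pause_frames(uint8_t port_id)
{
        struct rte_eth_fc_conf fc_conf;
        int ret;

        memset(&fc_conf, 0, sizeof(fc_conf));

        /* Read the current settings so only the mode is changed. */
        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret < 0)
                return ret;

        fc_conf.mode = RTE_FC_NONE;  /* neither send nor honor pause frames */
        fc_conf.autoneg = 0;         /* do not autonegotiate flow control */

        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}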


On Tue, Mar 8, 2016 at 12:41 AM, Wiles, Keith <keith.wiles at intel.com> wrote:

> From:  Royce Niu <royceniu at gmail.com>
> Date:  Monday, March 7, 2016 at 10:35 AM
> To:  Keith Wiles <keith.wiles at intel.com>
> Cc:  Royce Niu <royceniu at gmail.com>, "users at dpdk.org" <users at dpdk.org>
> Subject:  Re: [dpdk-users] Performance Problem of DPDK pkt-gen
>
>
> >Dear Keith,
> >
> >
> >I started pkt-gen on both PCs, sending 64-byte packets at the same time:
> >
> >PC1           PC2
> >NIC0  ->  NIC0  (12Mpps)
> >NIC1  <-  NIC1  (12Mpps)
> >
> >
> >Although it is less than 14Mpps, is that acceptable?
> >
> >So the problem is pause packets?
> >
> >
> >
>
> I am guessing that is the problem; maybe someone else has a better idea.
> I assume the issue is the L2FWD code having to move packets between NUMA
> zones across the QPI bus. In Pktgen each port has its own set of mbufs
> and does not need to interact between sockets.
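>
> As an illustration of that per-socket approach: a forwarding application
> can give each port an mbuf pool allocated on the NIC's own socket, so RX
> buffers never have to cross QPI. A minimal sketch, assuming the DPDK 2.2
> API; the pool name and sizes are illustrative:
>
> #include <stdio.h>
> #include <rte_ethdev.h>
> #include <rte_lcore.h>
> #include <rte_mbuf.h>
>
> /* Sketch: one mbuf pool per NUMA socket, so a port's RX buffers live on
>  * the socket its NIC is attached to. Reuses the pool if one was already
>  * created for that socket. */
> static struct rte_mempool *
> pool_for_port(uint8_t port_id)
> {
>         char name[RTE_MEMPOOL_NAMESIZE];
>         struct rte_mempool *mp;
>         int socket = rte_eth_dev_socket_id(port_id);
>
>         if (socket < 0)         /* NUMA node unknown */
>                 socket = (int)rte_socket_id();
>
>         snprintf(name, sizeof(name), "mbuf_pool_s%d", socket);
>         mp = rte_mempool_lookup(name);
>         if (mp != NULL)
>                 return mp;
>         return rte_pktmbuf_pool_create(name, 8192, 256, 0,
>                                        RTE_MBUF_DEFAULT_BUF_SIZE, socket);
> }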
> >
> >
> >On Tue, Mar 8, 2016 at 12:25 AM, Wiles, Keith
> ><keith.wiles at intel.com> wrote:
> >
> >From:  Royce Niu <royceniu at gmail.com>
> >Date:  Monday, March 7, 2016 at 10:01 AM
> >To:  Keith Wiles <keith.wiles at intel.com>
> >Cc:  Royce Niu <royceniu at gmail.com>, "users at dpdk.org" <users at dpdk.org>
> >Subject:  Re: [dpdk-users] Performance Problem of DPDK pkt-gen
> >
> >
> >>Dear Keith,
> >>
> >>I am doing measurement work. The two PCs have the same software and
> >>hardware configuration, with two 10Gb/s links.
> >>
> >>L2FWD actually runs in a virtual machine on the L2FWD PC. I don't mind
> >>packet drops as long as L2FWD in the VM does its best.
> >>
> >>Is there a way to disable rate limiting in Linux/DPDK, so I can measure
> >>how many packets are lost at 14.4Mpps?
> >
> >Sorry, I am not sure how to turn off pause frames. But you should try
> >running Pktgen on both machines to verify the problem.
> >>
> >>
> >>
> >>
> >>
> >>On Mon, Mar 7, 2016 at 11:49 PM, Wiles, Keith
> >><keith.wiles at intel.com> wrote:
> >>
> >>From: Royce Niu <royceniu at gmail.com>
> >>Date: Monday, March 7, 2016 at 9:41 AM
> >>To: Keith Wiles <keith.wiles at intel.com>
> >>Cc: Royce Niu <royceniu at gmail.com>, "users at dpdk.org" <users at dpdk.org>
> >>Subject: Re: [dpdk-users] Performance Problem of DPDK pkt-gen
> >>
> >>
> >>
> >>Yes.
> >>
> >>The problem is that the sending rate is not 14Mpps when L2FWD is running.
> >>
> >>When L2FWD is running, the sending rate is about 4Mpps instead of
> >>14Mpps. When I shut down the L2FWD PC, the sending rate recovers to
> >>14Mpps.
> >>
> >>I think there is something wrong with my Pktgen PC. Could you check my
> >>commands? Is there anything wrong with them?
> >>
> >>I do not think Pktgen or the PC it runs on has a problem. I expect the
> >>second PC is not able to keep up with the RX rate and is sending pause
> >>frames to the TX machine. The pause frames will reduce the TX rate on
> >>the Pktgen PC.
> >>
> >>Try running Pktgen on the L2FWD PC and see if the rate drops. If the
> >>rate does not drop, then the second PC is sending pause frames back to
> >>the first PC, rate-limiting the TX side.
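> >>
> >>One way to see the throttling directly is to sample the port counters
> >>while L2FWD is up. A minimal sketch, assuming DPDK's rte_eth_stats_get()
> >>and rte_delay_ms(); the function name is mine:
> >>
> >>#include <inttypes.h>
> >>#include <stdio.h>
> >>#include <rte_cycles.h>
> >>#include <rte_ethdev.h>
> >>
> >>/* Sketch: sample TX packets/sec over one second to confirm whether
> >> * the transmit rate really drops when L2FWD is running. */
> >>static void
> >>print_tx_rate(uint8_t port_id)
> >>{
> >>        struct rte_eth_stats before, after;
> >>
> >>        rte_eth_stats_get(port_id, &before);
> >>        rte_delay_ms(1000);
> >>        rte_eth_stats_get(port_id, &after);
> >>
> >>        printf("port %u TX: %" PRIu64 " pps\n", port_id,
> >>               after.opackets - before.opackets);
> >>}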
> >>
> >>On Mon, Mar 7, 2016 at 11:35 PM, Wiles, Keith
> >><keith.wiles at intel.com> wrote:
> >>
> >>From: Royce Niu <royceniu at gmail.com>
> >>Date: Monday, March 7, 2016 at 9:30 AM
> >>To: Keith Wiles <keith.wiles at intel.com>
> >>Cc: Royce Niu <royceniu at gmail.com>, "users at dpdk.org" <users at dpdk.org>
> >>Subject: Re: [dpdk-users] Performance Problem of DPDK pkt-gen
> >>
> >>
> >>
> >>Hi, Keith
> >>
> >>Maybe; I have not deliberately configured CPU affinity on the L2FWD PC
> >>so far.
> >>
> >>But my question is why the first PC has a poor sending rate when L2FWD
> >>is running on the second PC.
> >>
> >>You mean the problem is related to L2FWD?
> >>
> >>The sending rate of the Pktgen PC should be constant, but the forwarding
> >>rate of the second PC may be the problem: packets received on one socket
> >>and then sent out on another have to cross the QPI bus between sockets,
> >>which is not as fast.
> >>
> >>
> >>
> >>On Mon, Mar 7, 2016 at 11:26 PM, Wiles, Keith
> >><keith.wiles at intel.com> wrote:
> >>
> >>>Dear all,
> >>>
> >>>I am using a server with 4 CPUs (4 x 8-core CPUs with HT) and X520
> >>>NICs.
> >>>
> >>>When I use pkt-gen on NIC1 or NIC2 alone, the rate for generating
> >>>64-byte packets is 14Mpps.
> >>>
> >>>If I generate on both NIC1 and NIC2, the rate for generating 64-byte
> >>>packets on both is more than 13Mpps.
> >>>
> >>>However, when I use an identically configured PC (running DPDK L2FWD)
> >>>to bridge NIC1 and NIC2, so that I can generate packets on NIC1 and
> >>>receive them on NIC2 in pkt-gen, the generating rate drops to 4Mpps
> >>>and the receive rate is 3Mpps.
> >>
> >>I am not sure how you configured the second PC for L2FWD, but I suspect
> >>L2FWD is receiving packets on socket 0 and sending them on socket 1,
> >>which means the QPI bus gets involved. Is this the case?
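> >>
> >>You can check this from inside L2FWD by comparing each port's socket
> >>with the socket of the lcore polling it. A minimal sketch, assuming
> >>rte_eth_dev_socket_id() and rte_lcore_to_socket_id(); the function
> >>name is mine:
> >>
> >>#include <stdio.h>
> >>#include <rte_ethdev.h>
> >>#include <rte_lcore.h>
> >>
> >>/* Sketch: warn when a port and the lcore polling it sit on different
> >> * NUMA sockets, i.e. every packet crosses the QPI bus. */
> >>static void
> >>check_numa(uint8_t port_id, unsigned lcore_id)
> >>{
> >>        int port_socket  = rte_eth_dev_socket_id(port_id);
> >>        int lcore_socket = (int)rte_lcore_to_socket_id(lcore_id);
> >>
> >>        if (port_socket >= 0 && port_socket != lcore_socket)
> >>                printf("port %u (socket %d) polled by lcore %u "
> >>                       "(socket %d): traffic crosses QPI\n",
> >>                       port_id, port_socket, lcore_id, lcore_socket);
> >>}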
> >>
> >>>
> >>>
> >>>I want to know why the generating rate is slower than without the
> >>>NIC1-NIC2 bridge, and how to solve this problem.
> >>>
> >>>The detailed information is as follows.
> >>>
> >>>sudo sysctl vm.nr_hugepages=4096
> >>>echo 1024 | sudo tee
> >>>/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> >>>echo 1024 | sudo tee
> >>>/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> >>>echo 1024 | sudo tee
> >>>/sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
> >>>echo 1024 | sudo tee
> >>>/sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
> >>>
> >>>
> >>>sudo mkdir -p /dev/hugepages
> >>>sudo mount -t hugetlbfs nodev /dev/hugepages
> >>>
> >>>
> >>>sudo dpdk-2.2.0/tools/dpdk_nic_bind.py --status
> >>>sudo modprobe uio
> >>>sudo insmod dpdk-2.2.0/build/kmod/igb_uio.ko
> >>>
> >>>sudo dpdk-2.2.0/tools/dpdk_nic_bind.py -b igb_uio 04:00.0 04:00.1
> >>>sudo dpdk-2.2.0/tools/dpdk_nic_bind.py --status
> >>>
> >>>cd pktgen-2.9.12/
> >>>
> >>>sudo app/build/pktgen -c 0x1f -n 3 --proc-type auto --socket-mem
> >>>128,128,128,128 -- -P -m "[1:3].0, [2:4].1" -f test/set_seq.pkt
> >>>
> >>>I tried to change -m, but sometimes no packets are generated by
> >>>pkt-gen.
> >>>
> >>>
> >>>The core map is :
> >>>
> >>>EAL: Detected lcore 0 as core 0 on socket 0
> >>>EAL: Detected lcore 1 as core 0 on socket 1
> >>>EAL: Detected lcore 2 as core 0 on socket 2
> >>>EAL: Detected lcore 3 as core 0 on socket 3
> >>>EAL: Detected lcore 4 as core 1 on socket 0
> >>>EAL: Detected lcore 5 as core 1 on socket 1
> >>>EAL: Detected lcore 6 as core 1 on socket 2
> >>>EAL: Detected lcore 7 as core 1 on socket 3
> >>>EAL: Detected lcore 8 as core 2 on socket 0
> >>>EAL: Detected lcore 9 as core 2 on socket 1
> >>>EAL: Detected lcore 10 as core 2 on socket 2
> >>>EAL: Detected lcore 11 as core 2 on socket 3
> >>>EAL: Detected lcore 12 as core 3 on socket 0
> >>>EAL: Detected lcore 13 as core 3 on socket 1
> >>>EAL: Detected lcore 14 as core 3 on socket 2
> >>>EAL: Detected lcore 15 as core 3 on socket 3
> >>>EAL: Detected lcore 16 as core 4 on socket 0
> >>>EAL: Detected lcore 17 as core 4 on socket 1
> >>>EAL: Detected lcore 18 as core 4 on socket 2
> >>>EAL: Detected lcore 19 as core 4 on socket 3
> >>>EAL: Detected lcore 20 as core 5 on socket 0
> >>>EAL: Detected lcore 21 as core 5 on socket 1
> >>>EAL: Detected lcore 22 as core 5 on socket 2
> >>>EAL: Detected lcore 23 as core 5 on socket 3
> >>>EAL: Detected lcore 24 as core 6 on socket 0
> >>>EAL: Detected lcore 25 as core 6 on socket 1
> >>>EAL: Detected lcore 26 as core 6 on socket 2
> >>>EAL: Detected lcore 27 as core 6 on socket 3
> >>>EAL: Detected lcore 28 as core 7 on socket 0
> >>>EAL: Detected lcore 29 as core 7 on socket 1
> >>>EAL: Detected lcore 30 as core 7 on socket 2
> >>>EAL: Detected lcore 31 as core 7 on socket 3
> >>>EAL: Detected lcore 32 as core 0 on socket 0
> >>>EAL: Detected lcore 33 as core 0 on socket 1
> >>>EAL: Detected lcore 34 as core 0 on socket 2
> >>>EAL: Detected lcore 35 as core 0 on socket 3
> >>>EAL: Detected lcore 36 as core 1 on socket 0
> >>>EAL: Detected lcore 37 as core 1 on socket 1
> >>>EAL: Detected lcore 38 as core 1 on socket 2
> >>>EAL: Detected lcore 39 as core 1 on socket 3
> >>>EAL: Detected lcore 40 as core 2 on socket 0
> >>>EAL: Detected lcore 41 as core 2 on socket 1
> >>>EAL: Detected lcore 42 as core 2 on socket 2
> >>>EAL: Detected lcore 43 as core 2 on socket 3
> >>>EAL: Detected lcore 44 as core 3 on socket 0
> >>>EAL: Detected lcore 45 as core 3 on socket 1
> >>>EAL: Detected lcore 46 as core 3 on socket 2
> >>>EAL: Detected lcore 47 as core 3 on socket 3
> >>>EAL: Detected lcore 48 as core 4 on socket 0
> >>>EAL: Detected lcore 49 as core 4 on socket 1
> >>>EAL: Detected lcore 50 as core 4 on socket 2
> >>>EAL: Detected lcore 51 as core 4 on socket 3
> >>>EAL: Detected lcore 52 as core 5 on socket 0
> >>>EAL: Detected lcore 53 as core 5 on socket 1
> >>>EAL: Detected lcore 54 as core 5 on socket 2
> >>>EAL: Detected lcore 55 as core 5 on socket 3
> >>>EAL: Detected lcore 56 as core 6 on socket 0
> >>>EAL: Detected lcore 57 as core 6 on socket 1
> >>>EAL: Detected lcore 58 as core 6 on socket 2
> >>>EAL: Detected lcore 59 as core 6 on socket 3
> >>>EAL: Detected lcore 60 as core 7 on socket 0
> >>>EAL: Detected lcore 61 as core 7 on socket 1
> >>>EAL: Detected lcore 62 as core 7 on socket 2
> >>>EAL: Detected lcore 63 as core 7 on socket 3


-- 
Regards,

Royce Niu

