[dpdk-users] Pktgen with bonding port
Vincent Li
vincent.mc.li at gmail.com
Thu Dec 29 22:11:29 CET 2016
Forgot to CC users; adding the list back for the sake of user reference.
On Thu, Dec 29, 2016 at 12:49 PM, Wiles, Keith <keith.wiles at intel.com>
wrote:
>
> > On Dec 29, 2016, at 2:39 PM, Vincent Li <vincent.mc.li at gmail.com> wrote:
> > >
> > > Here is a command line I used for 8 ports / 2 bonds of 4 ports each.
> > >
> > > ./app/app/x86_64-native-linuxapp-gcc/app/pktgen -l 1-3,18-19 -n 4 \
> > >   --proc-type auto --log-level 8 --socket-mem 4096,4096 --file-prefix pg \
> > >   --vdev=net_bonding0,mode=4,xmit_policy=l23,slave=0000:04:00.0,slave=0000:04:00.1,slave=0000:04:00.2,slave=0000:04:00.3 \
> > >   --vdev=net_bonding1,mode=4,xmit_policy=l23,slave=0000:81:00.0,slave=0000:81:00.1,slave=0000:81:00.2,slave=0000:81:00.3 \
> > >   -b 05:00.0 -b 05:00.1 -b 82:00.0 -b 83:00.0 \
> > >   -- -T -P --crc-strip -m [2:3].0 -m [18:19].1 -f themes/black-yellow.theme
> > >
> >
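(For my own reference: as I understand the DPDK bonding PMD, the vdev
argument above follows the general form

  --vdev=net_bonding<N>,mode=<0-6>,xmit_policy=<l2|l23|l34>,slave=<PCI addr>[,slave=<PCI addr>,...]

where mode=4 is 802.3ad/LACP and xmit_policy picks the hash used to spread
traffic across the slaves; I am going by the bonding PMD docs here, so
correct me if I have that wrong.)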
> > Just for clarification: in -m [2:3].0 -m [18:19].1, the .0 means the
> > port id of the vdev net_bonding0 and the .1 means the vdev net_bonding1,
> > correct? So in my case I run it like:
> >
> > # ./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0xff \
> >     --vdev=net_bonding0,mode=4,xmit_policy=l34,slave=0000:04:00.1,slave=0000:04:00.0 \
> >     -- -P -m [0:1-7].0
> >
> > I am confused whether it should be -m [0:1-7].0 or -m [0:1-7].2, since
> > Pktgen reports "port 2" in the output below:
> >
> >
> > EAL: Initializing pmd_bond for net_bonding0
> > PMD: Using mode 4, it is necessary to do TX burst and RX burst at least every 100ms.
> > EAL: Create bonded device net_bonding0 on port 2 in mode 4 on socket 0.
>
> When you add bonding ports to DPDK, the bonding ports appear first. That
> means if you have 2 bond ports and 4 real ports, you assign 2 real ports
> to each bond.
>
> If you type ‘page cpu’ I think I display the known ports; the ‘page
> stats’ page also displays the bond ports and the physical port stats.
>
> In DPDK the ports are numbered 0 - N starting with the bonding ports:
> 0 - net_bond0
> 1 - net_bond1
> 2 - port 0 on the PCI bus using lspci excluding blacklisted ports
> 3 - port 1
> 4 - port 2
> 5 - port 3
>
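If I follow that numbering, then with my single bond over two slaves the
bond device should come up as port 0, and I would have expected a mapping
along the lines of

  -m [1:2].0

i.e. lcore 1 for RX and lcore 2 for TX on the bond as port 0, going by the
[rx-cores:tx-cores].port form in the pktgen docs (correct me if I am
reading that wrong).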
Interesting, but that somehow does not agree with my running example.
This time I ran it with -m [0:1-7].2:
./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0xff \
    --vdev=net_bonding0,mode=0,xmit_policy=l34,slave=0000:04:00.1,slave=0000:04:00.0 \
    -- -P -m [0:1-7].2
Pktgen> load bond2
Pktgen> page stats
/ <Real Port Stats Page>  Copyright (c) <2010-2016>, Intel Corporation
  Port Name          Pkts Rx/Tx      Rx Errors/Missed   Rate Rx/Tx   MAC Address
  0-0000:04:00.0:    0/0             0/0                0/0          E8:EA:6A:06:1B:1B
  1-0000:04:00.1:    15/0            0/0                0/0          E8:EA:6A:06:1B:1B
  2-net_bonding0:    15/0            0/0                0/0          E8:EA:6A:06:1B:1B
-- Pktgen Ver: 3.1.0 (DPDK 17.02.0-rc0)  Powered by Intel® DPDK ---------------
Pktgen> start 2
Pktgen> page stats
/ <Real Port Stats Page>  Copyright (c) <2010-2016>, Intel Corporation
  Port Name          Pkts Rx/Tx           Rx Errors/Missed   Rate Rx/Tx    MAC Address
  0-0000:04:00.0:    511/1572001177       0/190420           0/8104641     E8:EA:6A:06:1B:1B
  1-0000:04:00.1:    511/1572001819       0/245007           0/8104667     E8:EA:6A:06:1B:1B
  2-net_bonding0:    1022/3144007979      0/435427           0/16209311    E8:EA:6A:06:1B:1B
-- Pktgen Ver: 3.1.0 (DPDK 17.02.0-rc0)  Powered by Intel® DPDK ---------------
It looks to me like DPDK treats net_bonding0 as port 2.
This time I can see packets pass through both links, though the throughput
is still under 10 Gbit/s when viewed from the BIG-IP side:
# tmsh show sys performance throughput
Sys::Performance Throughput
-----------------------------------------------------------------------------
Throughput(bits)(bits/sec)     Current    Average    Max(since 12/29/16 10:11:46)
-----------------------------------------------------------------------------
Service                         817.1K     116.9K     866.6K
In                                7.9G       1.2G       8.7G
Out                               1.1M     162.7K       1.2M
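One thing I still want to try, to push the bonded throughput higher, is
larger frames and full rate on the bond port, along the lines of the usual
pktgen runtime commands (just a sketch on my side, not verified on this
setup yet):

  set 2 size 1518
  set 2 rate 100
  start 2

since with small frames the bits/sec number stays well under line rate even
when packets/sec is maxed out.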
> Pktgen uses the above numbering scheme, so port 0 is net_bond0 and port 1
> is net_bond1. The other four ports are bonded to net_bond0-1 and cannot be
> used directly anymore from Pktgen.
>
> >
>
> Regards,
> Keith