[dpdk-users] Any way to get more than 40Mpps with 64 bytes using a XL710 40Gb nic
Andrew Theurer
atheurer at redhat.com
Mon Oct 2 13:48:16 CEST 2017
On Fri, Sep 29, 2017 at 3:53 PM, Mauricio Valdueza <mvaldueza at vmware.com>
wrote:
> Hi Guys
>
> Max theoretical value is 56.8 Mpps… but practical PCIe limitations allow
> us to reach 42Mpps
>
Which PCI limitation?
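For reference, the usual back-of-the-envelope 40GbE line-rate arithmetic
(the 20B of per-frame overhead being the 8B preamble plus the 12B
inter-frame gap) comes out like this; by that math 64B frames top out
around 59.5Mpps, so any lower ceiling has to come from the NIC or the bus:

    # line rate in pps = 40e9 bits/s / ((frame_size + 20) * 8)
    echo $(( 40000000000 / ((64  + 20) * 8) ))   # 64B  -> 59523809 pps
    echo $(( 40000000000 / ((128 + 20) * 8) ))   # 128B -> 33783783 pps
    echo $(( 40000000000 / ((158 + 20) * 8) ))   # 158B -> 28089887 pps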
>
> I am reaching 36Mpps, so where are the 6Mpps lost?
>
Does your hypervisor use 1GB pages for the VM memory?
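Guest-side you can at least confirm what the VM itself is using (whether
ESXi backs the VM's memory with 1GB pages is a separate host-level
setting); a quick check, assuming a stock Ubuntu 16.04 guest:

    # hugepage size currently configured in the guest (2048 kB vs 1048576 kB)
    grep Hugepagesize /proc/meminfo
    # 1GB pages must be reserved at boot, e.g. on the kernel command line:
    #   default_hugepagesz=1G hugepagesz=1G hugepages=8
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages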
>
> Mau
>
> On 29/09/2017, 06:22, "Wiles, Keith" <keith.wiles at intel.com> wrote:
>
>
> > On Sep 28, 2017, at 9:40 PM, Andrew Theurer <atheurer at redhat.com>
> wrote:
> >
> > In our tests, ~36Mpps is the maximum we can get. We usually run a
> > test with TRex, bidirectional, 2 PCIe cards, 1 port per x8 gen3 PCIe
> > adapter, with a device under test using the same HW config but running
> > testpmd with 2 or more queues per port. Bidirectional aggregate traffic
> > is in the 72Mpps range. So, in that test, each active port is
> > transmitting and receiving ~36Mpps; however, I don't believe the
> > received packets are DMA'd to memory, just counted on the adapter. I
> > have never observed the Fortville doing higher than that.
>
> 40Gbits is the link limit and, if I remember correctly, ~36Mpps is the
> max for the PCIe bus. TRex must be counting differently, as you stated. I
> need to ask some folks here.
>
> I have two 40G NICs, but at this time I do not have enough slots to
> install the second 40G card and keep my 10G NICs in the system.
>
> I need to fix the problem below, but have not had the chance.
>
> >
> > -Andrew
> >
> > On Thu, Sep 28, 2017 at 3:59 PM, Wiles, Keith <keith.wiles at intel.com>
> wrote:
> >
> > > On Sep 28, 2017, at 6:06 AM, Mauricio Valdueza <
> mvaldueza at vmware.com> wrote:
> > >
> > > Hi Guys;
> > >
> > > I am testing a Fortville 40Gb NIC with pktgen
> > >
> > > I see line rate at 40Gb with 158B packet size, but once I decrease the
> size, line rate falls far short
> >
> > In Pktgen the packet count is taken from the hardware registers on
> > the NIC, and the bit rate is calculated from those values. Not all NICs
> > flush the TX done queue, so from one start command to the next the
> > numbers can be off while old packets are still being recycled alongside
> > the new-size packets. Please try the different sizes and bring pktgen
> > down between runs, just to see if that is the problem.
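> > Something like this between runs, from pktgen's own CLI (a sketch;
> > the point is the full restart, not just stop/start):
> >
> >     Pktgen:/> stop all
> >     Pktgen:/> quit
> >     # relaunch pktgen, then set the new size before transmitting again:
> >     Pktgen:/> set all size 128
> >     Pktgen:/> start all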
> >
> > >
> > > WITH 158B
> > > Link State : <UP-40000-FD> ----TotalRate----
> > > Pkts/s Max/Rx : 0/0 0/0
> > > Max/Tx : 28090480/28089840 28090480/28089840
> > > MBits/s Rx/Tx : 0/40000 0/40000
> > > -----------------------------------------------------------------------------------------------
> > >
> > > WITH 128B
> > > Link State : <UP-40000-FD> ----TotalRate----
> > > Pkts/s Max/Rx : 0/0 0/0
> > > Max/Tx : 33784179/33783908 33784179/33783908
> > > MBits/s Rx/Tx : 0/40000 0/40000
> > > -----------------------------------------------------------------------------------------------
> > >
> > > With 64B
> > > Link State : <UP-40000-FD> ----TotalRate----
> > > Pkts/s Max/Rx : 0/0 0/0
> > > Max/Tx : 35944587/35941680 35944587/35941680
> > > MBits/s Rx/Tx : 0/24152 0/24152
> > > -----------------------------------------------------------------------------------------------
> > >
> > > Should I run any optimization?
> > >
> > > My environment is:
> > >
> > > • VMware ESXi version: 6.5.0, 4887370
> > > • NIC: Intel Corporation XL710 for 40GbE QSFP+
> > > • NIC driver version: i40en 1.3.1
> > > • Server Vendor: Dell
> > > • Server Model: Dell Inc. PowerEdge R730
> > > • CPU Model: Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
> > > • Huge pages size: 2M
> > > • Test VM: Ubuntu 16.04
> > > • DPDK version: dpdk-17.08 (compiled in the VM)
> > > • Test traffic kind: IP/UDP (both tested)
> > > • Traffic generator: pktgen-3.4.1
> > >
> > >
> > > I am executing:
> > >
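> > > # -c 0xff: coremask; -n 3: memory channels; the pktgen mapping
> > > # "[1:2-7].0" gives port 0 one RX core (1) and six TX cores (2-7)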
> > > sudo ./app/x86_64-native-linuxapp-gcc/pktgen -c 0xff -n 3
> > > --proc-type auto --socket-mem 9096 -- -m "[1:2-7].0" --crc-strip
> > >
> > >
> > > Thanks in advance
> > >
> > >
> > > mauricio
> > >
> >
> > Regards,
> > Keith
> >
> >
>
> Regards,
> Keith
>
>
>
>