[dpdk-users] Issue with Pktgen and OVS-DPDK

Chen, Junjie J junjie.j.chen at intel.com
Tue Jan 9 14:00:16 CET 2018


Hi,
There are two defects that may cause this issue:

1) in pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix low performance in VM virtio pmd mode
diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
index 759f95d..93065f6 100644
--- a/lib/common/mbuf.h
+++ b/lib/common/mbuf.h
@@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
 	m->nb_segs = 1;
 	m->port = 0xff;
 
+	m->data_len = m->pkt_len;
 	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
 		RTE_PKTMBUF_HEADROOM : m->buf_len;
 }
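
Without this assignment, data_len keeps whatever stale value was left over from the mbuf's previous use, and that per-segment length is what the virtio Tx path ends up advertising to the vhost backend. A minimal sketch of the invariant (the helper below is made up for illustration, it is not pktgen code):

#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Illustration only: when filling a single-segment mbuf, pkt_len and
 * data_len must agree, otherwise the receiver sees whatever stale
 * length was left behind in data_len. */
static inline void
fill_frame(struct rte_mbuf *m, const void *frame, uint16_t len)
{
	rte_memcpy(rte_pktmbuf_mtod(m, void *), frame, len);
	m->pkt_len  = len;	/* total length of the packet */
	m->data_len = len;	/* length of this segment */
	m->nb_segs  = 1;
}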

2) in virtio_rxtx.c, please see commit f1216c1eca5a5 ("net/virtio: fix Tx packet length stats").
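
Going by the commit title, the Tx byte counters were computed from a length that no longer matched the frame on the wire. I have not re-checked the exact diff, but the safe pattern looks like this (hypothetical names, not the actual virtio code):

#include <stdint.h>
#include <rte_mbuf.h>

struct sketch_stats { uint64_t packets; uint64_t bytes; };

/* Sketch: take the frame length before the Tx path rewrites the mbuf
 * (e.g. by prepending the virtio-net header), so the byte counter
 * reflects the original packet size. */
static inline void
sketch_update_tx_stats(struct sketch_stats *s, const struct rte_mbuf *m)
{
	s->packets += 1;
	s->bytes   += rte_pktmbuf_pkt_len(m);
}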

You could apply both of these patches and give it a try.
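
For the virtio fix, assuming the commit is not already in your DPDK tree, something like:

    git cherry-pick f1216c1eca5a5

should bring it in (or apply the equivalent backport by hand); the pktgen change above can be applied with patch or git apply.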

Cheers
JJ


> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Hu, Xuekun
> Sent: Tuesday, January 9, 2018 2:38 PM
> To: Wiles, Keith <keith.wiles at intel.com>; Gabriel Ionescu
> <Gabriel.Ionescu at enea.com>; Tan, Jianfeng <jianfeng.tan at intel.com>
> Cc: users at dpdk.org
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> Hi, Keith
> 
> Any updates on this issue? We see similar behavior: ovs-dpdk reports
> receiving packets whose size grows in 12-byte increments until it exceeds
> 1518, at which point pktgen stops sending packets, even though we only
> ask pktgen to generate 64B packets. It only happens with two vhost-user
> ports on the same server; if pktgen runs on another server, there is no
> such issue.
> 
> We tested the latest pktgen 3.4.6, and OVS-DPDK 2.8, with DPDK 17.11.
> 
> We also found that qemu 2.8.1 and qemu 2.10 have this problem, while
> qemu 2.5 does not. So it seems to be a compatibility issue between
> pktgen/dpdk/qemu?
> 
> Thanks.
> Thx, Xuekun
> 
> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Wiles, Keith
> Sent: Wednesday, May 03, 2017 4:24 AM
> To: Gabriel Ionescu <Gabriel.Ionescu at enea.com>
> Cc: users at dpdk.org
> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> Comments inline:
> > On May 2, 2017, at 8:20 AM, Gabriel Ionescu <Gabriel.Ionescu at enea.com>
> wrote:
> >
> > Hi,
> >
> > I am using DPDK-Pktgen with an OVS bridge that has two vHost-user ports,
> > and I am seeing an issue where it looks like Pktgen does not generate
> > packets correctly.
> >
> > For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> >
> > The OVS bridge is created with:
> > ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> > ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
> > ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser ofport_request=2
> > ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> > ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> >
> > DPDK-Pktgen is launched with the following command so that packets
> > generated through port 0 are received by port 1 and vice versa:
> > pktgen -c 0xF --file-prefix pktgen --no-pci \
> >        --vdev=virtio_user0,path=/tmp/vhost-user1 \
> >        --vdev=virtio_user1,path=/tmp/vhost-user2 \
> >        -- -P -m "[0:1].0, [2:3].1"
> 
> The above command line is wrong, as Pktgen needs the first lcore for
> display output and timers. I would not use -c 0xF, but -l 1-5 instead, as
> it is a lot easier to understand IMO. With -l 1-5 you are using 5 lcores
> (skipping lcore 0 in a 6-lcore VM): one for Pktgen and 4 for the two
> ports, i.e. -m [2:3].0 -m [4:5].1, leaving lcore 1 for Pktgen to use. I
> am concerned you did not see some performance or lockup problem; I really
> need to add a test for these types of problems :-( You can also give the
> VM just 5 lcores, in which case Pktgen shares lcore 0 with Linux using
> the -l 0-4 option.
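> 
> A corrected invocation along those lines (assuming a 6-lcore VM and the
> same vdev paths as in your original command) would be:
> 
> pktgen -l 1-5 --file-prefix pktgen --no-pci \
>        --vdev=virtio_user0,path=/tmp/vhost-user1 \
>        --vdev=virtio_user1,path=/tmp/vhost-user2 \
>        -- -P -m [2:3].0 -m [4:5].1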
> 
> When asked to send 64-byte frames, Pktgen sends a 60-byte frame plus the
> 4-byte Frame Checksum (FCS). This does work, so the problem must be in
> how vhost-user is testing the packet size. In the mbuf you have both the
> payload size and the buffer size: the buffer size could be 1524, but the
> payload or frame size will be 60 bytes, as the 4-byte FCS is appended to
> the frame by the hardware. It seems to me that vhost-user is not looking
> at the correct struct rte_mbuf member variable in its testing.
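> 
> For reference, the three different lengths carried by a struct rte_mbuf;
> the field names below are the real DPDK ones, the helper itself is just
> a sketch:
> 
> #include <inttypes.h>
> #include <stdio.h>
> #include <rte_mbuf.h>
> 
> /* buf_len is the capacity of the buffer (the 1524 above), while
>  * data_len and pkt_len describe the frame actually carried: 60 bytes
>  * for a 64-byte wire frame, since the 4-byte FCS is added by HW. */
> static void
> dump_mbuf_lens(const struct rte_mbuf *m)
> {
> 	printf("buf_len=%" PRIu16 " data_len=%" PRIu16 " pkt_len=%" PRIu32 "\n",
> 	       m->buf_len, m->data_len, m->pkt_len);
> }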
> 
> >
> > In Pktgen, the default settings are used for both ports:
> >
> > - Tx Count: Forever
> > - Rate: 100%
> > - PktSize: 64
> > - Tx Burst: 32
> >
> > Whenever I start generating packets through one of the ports (in this
> > example port 0, by running "start 0"), the OVS logs throw warnings
> > similar to:
> > 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956 log messages in last 49 seconds (most recently, 41 seconds ago) due to excessive rate
> > 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988 log messages in last 11 seconds (most recently, 0 seconds ago) due to excessive rate
> > 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 57564 max_packet_len 1518
> >
> > Port 1 does not receive any packets.
> >
> > When running Pktgen with the --socket-mem option (e.g. --socket-mem 512),
> > the behavior is different, but with the same warnings thrown by OVS: port 1
> > receives some packets, but with different sizes, even though they are
> > generated on port 0 with a 64-byte size:
> >  Flags:Port      :   P--------------:0   P--------------:1
> > Link State        :       <UP-10000-FD>       <UP-10000-FD>   ----TotalRate----
> > Pkts/s Max/Rx     :                 0/0             35136/0             35136/0
> >       Max/Tx     :        238144/25504                 0/0        238144/25504
> > MBits/s Rx/Tx     :             0/13270                 0/0             0/13270
> > Broadcast         :                   0                   0
> > Multicast         :                   0                   0
> >  64 Bytes        :                   0                 288
> >  65-127          :                   0                1440
> >  128-255         :                   0                2880
> >  256-511         :                   0                6336
> >  512-1023        :                   0               12096
> >  1024-1518       :                   0               12096
> > Runts/Jumbos      :                 0/0                 0/0
> > Errors Rx/Tx      :                 0/0                 0/0
> > Total Rx Pkts     :                   0               35136
> >      Tx Pkts     :             1571584                   0
> >      Rx MBs      :                   0                 227
> >      Tx MBs      :              412777                   0
> > ARP/ICMP Pkts     :                 0/0                 0/0
> >                  :
> > Pattern Type      :             abcd...             abcd...
> > Tx Count/% Rate   :       Forever /100%       Forever /100%
> > PktSize/Tx Burst  :           64 /   32           64 /   32
> > Src/Dest Port     :         1234 / 5678         1234 / 5678
> > Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> > Dst  IP Address   :         192.168.1.1         192.168.0.1
> > Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> > Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> > Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> > VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> >
> > -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK -------------------
> >
> > If packets are generated from an external source and testpmd is used to
> > forward traffic between the two vHost-user ports, the warnings are not
> > thrown by the OVS bridge.
> >
> > Should this setup work?
> > Is this an issue or am I setting something up wrong?
> >
> > Thank you,
> > Gabriel Ionescu
> 
> Regards,
> Keith


