[dpdk-dev] Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1

James Yu ypyu2011 at gmail.com
Wed Nov 27 21:06:57 CET 2013


Can you share your virtio driver with me ?

Do you mean creating multiple queues, each with 256 txd/rxd? The packets
could be stored in the free slots of those queues, but how would the virtio
pmd code feed those slots down to the hardware to deliver them?
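
To make sure I understand the data path, here is a rough sketch of what I
think the TX side has to do: use only the descriptors the host has handed
back, publish them in the avail ring, then notify ("kick") the host. The
names and structs below are illustrative only (not the actual
virtio-net-pmd code) and assume single-segment packets completed in order.
Is this roughly right?

#include <stdint.h>

/* Illustrative layout only; the real vring has desc/avail/used rings
 * defined by the virtio spec and needs memory barriers around updates. */
struct tx_slot {
        uint64_t addr;            /* guest-physical address of the packet buffer */
        uint32_t len;             /* packet length in bytes */
};

struct tx_queue {
        struct tx_slot *desc;     /* descriptor table, vq_size entries */
        uint16_t *avail_ring;     /* slot indices handed to the host */
        uint16_t  avail_idx;      /* next position to publish */
        uint16_t  vq_size;        /* fixed by the host at setup time, e.g. 256 */
        uint16_t  freeslots;      /* descriptors not currently owned by the host */
};

/* Returns 0 on success, -1 when the ring is full (the case l2fwd hits). */
static int tx_enqueue(struct tx_queue *tq, uint64_t buf_pa, uint32_t len)
{
        if (tq->freeslots < 1)
                return -1;

        uint16_t slot = tq->avail_idx % tq->vq_size;  /* ok only if completions are in order */

        tq->desc[slot].addr = buf_pa;                 /* point the descriptor at the mbuf */
        tq->desc[slot].len  = len;
        tq->avail_ring[slot] = slot;                  /* publish it to the host */
        tq->avail_idx++;
        tq->freeslots--;

        /* Finally the PMD "kicks" the device so vhost-net/QEMU drains the
         * ring; on legacy virtio-PCI that is a write to the notify register. */
        return 0;
}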

The other question: I was using vhost-net on the KVM host. That is
supposed to be transparent to the DPDK + virtio pmd code, but could it be
causing a problem in the packet delivery?

Thanks



On Tue, Nov 26, 2013 at 10:26 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:

> On Tue, 26 Nov 2013 21:15:02 -0800
> James Yu <ypyu2011 at gmail.com> wrote:
>
> > Running unidirectional traffic from a Spirent traffic generator to l2fwd
> > running inside a guest OS on a RHEL 6.2 KVM host, I hit a performance
> > issue and need to increase the number of rxd and txd from 256 to 1024.
> > There were not enough free slots for packets to be transmitted in this
> > routine:
> >       virtio_send_packet(){
> >       ....
> >         if (tq->freeslots < nseg + 1) {
> >                 return -1;
> >         }
> >       ....
> >       }
> >
> > How can I solve the performance issue? Some options:
> > 1. Increase the number of rxd and txd from 256 to 1024.
> >         This should keep packets from being dropped for lack of free
> > slots in the ring, but l2fwd fails to run and reports that the number
> > must be equal to 256 (see the setup sketch after this list).
> > 2. Increase MAX_PKT_BURST.
> >         This is not ideal since it increases latency while improving
> > throughput.
> > 3. Some other mechanism you know of?
> >         Is there any other approach that gives enough free slots to
> > store the packets before passing them down to PCI?
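> >
> > For reference, this is roughly where those constants enter in my
> > l2fwd-style port setup (sketched from memory against the ethdev API,
> > not the exact l2fwd code; error handling trimmed):
> >
> > #include <rte_ethdev.h>
> > #include <rte_lcore.h>
> >
> > #define NB_RXD 256   /* the virtio PMD only accepts the ring size the host offers */
> > #define NB_TXD 256
> >
> > static int configure_port(uint8_t port_id, struct rte_mempool *pool,
> >                           const struct rte_eth_conf *port_conf,
> >                           const struct rte_eth_rxconf *rx_conf,
> >                           const struct rte_eth_txconf *tx_conf)
> > {
> >         int ret = rte_eth_dev_configure(port_id, 1, 1, port_conf);
> >         if (ret < 0)
> >                 return ret;
> >
> >         /* presumably the call that fails when NB_RXD is raised to 1024 */
> >         ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD, rte_socket_id(),
> >                                      rx_conf, pool);
> >         if (ret < 0)
> >                 return ret;
> >
> >         ret = rte_eth_tx_queue_setup(port_id, 0, NB_TXD, rte_socket_id(),
> >                                      tx_conf);
> >         if (ret < 0)
> >                 return ret;
> >
> >         return rte_eth_dev_start(port_id);
> > }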
> >
> >
> > Thanks
> >
> > James
> >
> >
> > These are the performance numbers I measured from the l2fwd printout for
> > the receiving side. I added code inside l2fwd to handle the tx side.
> >
> > ====================================================================================
> > vhost-net is enabled on the KVM host, # of cache buffers: 4096, Ubuntu 12.04.3
> > LTS (3.2.0-53-generic); kvm 1.2.0, libvirtd 0.9.8
> > 64 bytes/pkt from Spirent @ 223k pps, each test run for 10 seconds.
> >
> > ====================================================================================
> > DPDK 1.3 + virtio + 256 txd/rxd + nice -19 priority (l2fwd, guest kvm
> > process)
> > bash command: nice -n -19
> > /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0
> > -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -d
> > /root/dpdk/virtio-net-pmd-1.1/librte_pmd_virtio.so -- -q 1 -p 1
> >
> > ====================================================================================
> > Spirent -> l2fwd (receiving 10G) (RX on KVM guest)
> >     MAX_PKT_BURST     Packets Per Second over 10 seconds (<1% loss)
> > --------------------------------------------------------------------
> >     32                 74k pps
> >     64                 80k pps
> >     128               126k pps
> >     256               133k pps
> >
> > l2fwd -> Spirent (10G port) (transmitting) (one-directional, single-port
> > (port 0) setup)
> >     MAX_PKT_BURST     Packets Per Second (< 1% packet loss)
> >     32                 88k pps
> >
> >
> > **********************************
> > The same test run on e1000 ports
> >
> >
> > ====================================================================================
> > DPDK 1.3 + e1000 + 1024 txd/rxd + nice -19 priority (l2fwd, guest kvm
> > process)
> > bash command: nice -n -19
> > /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0
> > -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -- -q 1 -p 1
> >
> > ====================================================================================
> > Spirent -> l2fwd (RECEIVING 10G)
> >     MAX_PKT_BURST     Packets Per Second (<= 1% packet loss)
> >     32                 110k pps
> >
> > l2fwd -> Spirent (10G port) (TRANSMITTING) (one-directional, single-port
> > (port 0) setup)
> >     MAX_PKT_BURST     pkts transmitted by l2fwd
> >     32                 171k pps (0% dropped)
> >     240                203k pps (6% dropped, 130k pps received on eth6
> > (assumed to be on the Spirent side)) **
> > **: not enough free slots in the tx ring
> > ==> This shows the effect of the small txd/rxd count (256): when more
> > traffic is generated, packets cannot be sent due to the lack of free
> > slots in the tx ring. I guess this is the symptom occurring in virtio_net.
>
> The number of slots with virtio is a parameter negotiated with the host,
> so unless the host (KVM) gives the device more slots, it won't work.
> I have a better virtio driver, and one of the features being added is
> multiqueue and merged TX buffer support, which would give a bigger queue.
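>
> As a concrete illustration of the negotiation point: with legacy
> virtio-PCI the guest selects a queue and reads its size from the device;
> it cannot write a larger size. A minimal sketch (register offsets follow
> the legacy virtio-PCI layout; the io_base value and queue index are
> illustrative):
>
> #include <stdint.h>
> #include <stdio.h>
> #include <sys/io.h>          /* inw/outw/iopl; x86, needs root */
>
> #define VIRTIO_PCI_QUEUE_NUM  12   /* 16-bit: ring size chosen by the host */
> #define VIRTIO_PCI_QUEUE_SEL  14   /* 16-bit: which virtqueue to query */
>
> static uint16_t virtqueue_size(uint16_t io_base, uint16_t queue_index)
> {
>         outw(queue_index, io_base + VIRTIO_PCI_QUEUE_SEL); /* select the queue */
>         return inw(io_base + VIRTIO_PCI_QUEUE_NUM);        /* read its fixed size */
> }
>
> int main(void)
> {
>         /* io_base would come from the virtio-net device's I/O BAR;
>          * 0xc040 and queue 1 (the virtio-net TX queue) are illustrative. */
>         if (iopl(3) != 0) { perror("iopl"); return 1; }
>         printf("TX queue size: %u\n", virtqueue_size(0xc040, 1));
>         return 0;
> }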
>
>

