[dpdk-users] Strange packet loss with multi-frame payloads

Shyam Shrivastav shrivastav.shyam at gmail.com
Tue Jul 18 07:50:32 CEST 2017


As I understand it, the problem disappears with 1 RX queue on the server. You
could reduce the number of queues on the server from 8 and arrive at an
optimal value without packet loss.
For the Intel 82599 NIC, packet loss with more than 4 RX queues was reported
on the dpdk dev or users mailing list; I read it in the archives some time
back while looking for similar information about the 82599.
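
For reference, this is roughly where the RX queue count is chosen at port
setup time. The names configure_port, NB_RXQ, RX_DESC and mbuf_pool below are
illustrative, not taken from your application:

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Illustrative values: start from 1 RX queue and raise the count until
     * the loss reappears. */
    #define NB_RXQ  1
    #define NB_TXQ  1
    #define RX_DESC 1024

    static int
    configure_port(uint16_t port_id, struct rte_mempool *mbuf_pool)
    {
            struct rte_eth_conf port_conf = { 0 };
            uint16_t q;
            int ret;

            ret = rte_eth_dev_configure(port_id, NB_RXQ, NB_TXQ, &port_conf);
            if (ret < 0)
                    return ret;

            for (q = 0; q < NB_RXQ; q++) {
                    ret = rte_eth_rx_queue_setup(port_id, q, RX_DESC,
                                    rte_eth_dev_socket_id(port_id),
                                    NULL, mbuf_pool);
                    if (ret < 0)
                            return ret;
            }
            /* TX queue setup and rte_eth_dev_start() omitted. */
            return 0;
    }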

On Tue, Jul 18, 2017 at 4:54 AM, Harold Demure <harold.demure87 at gmail.com>
wrote:

> Hello again,
>   I tried to convert my statically defined buffers into buffers allocated
> through rte_malloc (as discussed in the previous email, see quoted text).
> Unfortunately, the problem is still there :(
> Regards,
>   Harold
>
>
>
> >
> > 2. How do you know you have the packet loss?
> >
> >
> > *I know it because some fragmented packets never get fully reassembled.
> > If I print the packets seen by the server, I see something like "PCKT_ID
> > 10 FRAG 250, PCKT_ID 10 FRAG 252", and FRAG 251 is never printed.*
> >
> > *Actually, something strange that sometimes happens is that a core
> > receives fragments of two packets interleaved: say, frag 1 of packet X,
> > frag 2 of packet Y, frag 3 of packet X, frag 4 of packet Y.*
> > *Or that, after "losing" a fragment of packet X, only fragments with an
> > EVEN frag_id get printed for that packet X, at least for a while.*
> >
> > *This also led me to consider a bug in my implementation (I don't
> > experience this problem if I run with a SINGLE client thread). However,
> > with smaller payloads, even fragmented ones, everything runs smoothly.*
> > *If you have any suggestions for tests to run to spot a possible bug in
> > my implementation, they'd be more than welcome!*
> >
> > *MORE ON THIS: the buffers in which I store the packets taken from RX are
> > statically defined arrays, like struct rte_mbuf *temp_mbuf[SIZE]. SIZE
> > can be pretty high (say, 10K entries), and there are 3 of those arrays
> > per core. Could it be that, somehow, they mess up the memory layout
> > (e.g., they overlap)?* (An rte_malloc-based alternative is sketched after
> > this quoted text.)
> >
>
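
Regarding the statically defined per-core arrays described in the quoted text,
a rough sketch of the rte_malloc-based variant could look like the following;
the function name is made up, and SIZE just mirrors the "say, 10K entries"
mentioned above:

    #include <rte_malloc.h>
    #include <rte_mbuf.h>
    #include <rte_memory.h>
    #include <rte_lcore.h>

    #define SIZE 10240  /* illustrative, ~10K entries as in the quoted text */

    /* One zeroed, cache-line-aligned mbuf-pointer array, placed on the
     * NUMA socket of the calling core. */
    static struct rte_mbuf **
    alloc_mbuf_array(void)
    {
            return rte_zmalloc_socket("temp_mbuf",
                            SIZE * sizeof(struct rte_mbuf *),
                            RTE_CACHE_LINE_SIZE,
                            rte_socket_id());
    }

Each call returns a separate allocation, so two arrays obtained this way
cannot overlap by construction.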

