[dpdk-dev] Question about zero length segments in received mbuf

Bruce Richardson bruce.richardson at intel.com
Fri Oct 16 16:00:02 CEST 2015


On Fri, Oct 16, 2015 at 02:32:15PM +0100, Tom Kiely wrote:
> Hi,
>     I am currently experiencing a serious issue and was hoping someone else
> might have encountered it.
> 
> I have a KVM VM using two ixgbe interfaces A and B (configured to use PCI
> passthrough) and forwarding traffic from interface A via B.
> At about 4 million pps of 64-byte frames, the rx driver
> ixgbe_recv_scattered_pkts_vec() appears to be generating mbufs with 2
> segments, the first of which has data_len == 0 and the second data_len == 64.
> The real problem is that when ixgbe_xmit_pkts() on the tx side gets about 18
> of these packets, it seems to mess up the transmit descriptor handling.
> ixgbe_xmit_cleanup() never sees the STAT_DD bit set, so no descriptors get
> freed, leading to total traffic loss.
> 
> I'm still debugging the xmit side to find out what's causing the descriptor
> ring problem.
> 
> Has anyone encountered the rx-side zero-length-segment issue? I found a
> reference to such an issue on the web, but it was years old.
> 
> I'm using DPDK 1.8.0.
> 
> Any information gratefully received,
>    Tom

Hi Tom,

On the TX side, if these two-segment packets are being handed to the NIC, the
first thing to check is that the TX queue is actually configured to handle
multi-segment packets. By default, most drivers set the
ETH_TXQ_FLAGS_NOMULTSEGS flag in txq_flags at queue initialization, which
allows the PMD to select a simple TX path that assumes every packet is a
single-segment mbuf.
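
As a rough, untested sketch (port_id, queue_id and nb_txd stand in for
whatever your application already uses, and I'm assuming a DPDK version
whose rte_eth_dev_info exposes default_txconf), clearing the flag at queue
setup looks something like this:

    #include <rte_ethdev.h>

    /* Sketch: set up one TX queue with multi-segment support enabled. */
    static int
    setup_multiseg_txq(uint8_t port_id, uint16_t queue_id, uint16_t nb_txd)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_txconf txconf;

            rte_eth_dev_info_get(port_id, &dev_info);

            /* Start from the driver's defaults, but clear the flag that
             * promises "all mbufs have nb_segs == 1"; leaving it set lets
             * the PMD pick a simple TX routine that cannot handle
             * chained mbufs. */
            txconf = dev_info.default_txconf;
            txconf.txq_flags &= ~ETH_TXQ_FLAGS_NOMULTSEGS;

            return rte_eth_tx_queue_setup(port_id, queue_id, nb_txd,
                            rte_eth_dev_socket_id(port_id), &txconf);
    }

With the flag cleared, the PMD should select a TX function that copes with
nb_segs > 1.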
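
Separately, until the RX side is root-caused, one possible workaround is to
drop the empty head segment before the packets reach rte_eth_tx_burst(). A
minimal sketch follows; strip_empty_head() is a hypothetical helper of mine,
not anything in DPDK, and it only carries over the handful of head-mbuf
fields shown:

    #include <rte_mbuf.h>

    /* Hypothetical helper (not a DPDK API): free zero-length head
     * segments and promote the next segment to packet head. Assumes the
     * 1.8-era mbuf layout, where pkt_len/nb_segs live in the head mbuf. */
    static inline struct rte_mbuf *
    strip_empty_head(struct rte_mbuf *m)
    {
            while (m != NULL && m->data_len == 0 && m->next != NULL) {
                    struct rte_mbuf *next = m->next;

                    /* Copy head-only metadata to the new head. */
                    next->nb_segs = m->nb_segs - 1;
                    next->pkt_len = m->pkt_len;
                    next->port = m->port;
                    next->ol_flags = m->ol_flags;

                    m->next = NULL;
                    rte_pktmbuf_free_seg(m);
                    m = next;
            }
            return m;
    }

Calling this on each mbuf returned by rte_eth_rx_burst() before forwarding
should keep the zero-length segments away from the TX descriptor ring while
you debug the real cause.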

/Bruce

