Bug 1312 - When iterating through the mbufs, mbuf->nb_segs indicates there are 21 segments, but when reaching the 8th mbuf, its mbuf->next pointer is NULL
Summary: When iterating through the mbufs, mbuf->nb_segs indicates there are 21 segments, but when reaching the 8th mbuf, its mbuf->next pointer is NULL
Status: UNCONFIRMED
Alias: None
Product: DPDK
Classification: Unclassified
Component: core
Version: 20.11
Hardware: ARM Linux
Importance: Normal normal
Target Milestone: ---
Assignee: dev
URL:
Depends on:
Blocks:
 
Reported: 2023-11-09 13:34 CET by tingsong
Modified: 2023-11-10 23:34 CET
CC List: 1 user

Description tingsong 2023-11-09 13:34:01 CET
The version of DPDK is 20.11.

When iterating through the mbufs, mbuf->nb_segs indicates there are 21 segments, but when reaching the 8th mbuf, its mbuf->next pointer is NULL.

I used the ASan tool to check and found no out-of-bounds memory errors before encountering the NULL value in the mbuf's 'next' pointer.

My scenario: a sender transmits large packets of approximately 30,000 bytes through the kernel protocol stack, with the NIC MTU set to 1500. On the receiver side, I use DPDK to capture these packets. In Thread 1, I receive the packets with rte_eth_rx_burst and run them through the IP fragment reassembly process. Once a complete packet is reassembled, I enqueue the mbuf into an rte_ring named 'test'. In Thread 2, I dequeue the mbufs from the 'test' ring and iterate through them to copy the data out of each segment mbuf. However, some of these mbufs have a 'next' pointer that is NULL before the chain is complete.
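
Roughly, the Thread 2 loop looks like this (a minimal sketch with illustrative names, not my exact code; 'test_ring' stands for the 'test' rte_ring above):

/* Thread 2: dequeue reassembled packets and walk each segment chain.
 * Requires <rte_ring.h> and <rte_mbuf.h>. */
struct rte_mbuf *pkts[32];
unsigned int n = rte_ring_dequeue_burst(test_ring, (void **)pkts, 32, NULL);
for (unsigned int i = 0; i < n; i++) {
        struct rte_mbuf *m = pkts[i];
        uint16_t walked = 0;
        for (struct rte_mbuf *seg = m; seg != NULL; seg = seg->next) {
                /* copy seg->data_len bytes from rte_pktmbuf_mtod(seg, char *) */
                walked++;
        }
        if (walked != m->nb_segs)
                /* this fires: nb_segs == 21 but the chain ends at segment 8 */
                printf("broken chain: nb_segs=%u, walked=%u\n", m->nb_segs, walked);
        rte_pktmbuf_free(m);
}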
Comment 1 tingsong 2023-11-09 13:41:42 CET
The issue arises sporadically, after receiving around 100 million packets.
Comment 2 Dmitry Kozlyuk 2023-11-09 15:10:02 CET
Please tell the PMD and the exact HW model (CPU, NIC).
Comment 3 tingsong 2023-11-10 02:33:59 CET
(In reply to Dmitry Kozlyuk from comment #2)
> Please tell the PMD and the exact HW model (CPU, NIC).

The PMD is used with igb_uio; the CPU is an FT2000/64 and the network card is a PS1600 on my end.
Comment 4 tingsong 2023-11-10 02:42:47 CET
(In reply to Dmitry Kozlyuk from comment #2)
> Please tell the PMD and the exact HW model (CPU, NIC).

The DPDK driver for the PS1600 NIC is provided by the manufacturer of the NIC itself, based on DPDK v20.11.

Thanks
Comment 7 Dmitry Kozlyuk 2023-11-10 10:24:38 CET
Seems to be related to:

#define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8

PMD and NIC seem to be irrelevant, sorry for pointing in that direction.
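
For context, the arithmetic behind that suspicion (assuming plain IPv4 with a 20-byte header and no options):

    max fragment payload = 1500 (MTU) - 20 (IPv4 header) = 1480 bytes
    fragments per packet = ceil(30000 / 1480) = 21

so a 30,000-byte datagram arrives as roughly 21 fragments, while librte_ip_frag can only track RTE_LIBRTE_IP_FRAG_MAX_FRAG fragments per reassembled packet.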
Comment 8 tingsong 2023-11-10 23:34:45 CET
(In reply to Dmitry Kozlyuk from comment #7)
> Seems to be related to:
> 
> #define RTE_LIBRTE_IP_FRAG_MAX_FRAG 8
> 
> PMD and NIC seem to be irrelevant, sorry for pointing in that direction.

Thank you for your response. I had originally set RTE_LIBRTE_IP_FRAG_MAX_FRAG to 64, because sending a 30,000-byte packet through a NIC with MTU 1500 results in approximately 21 fragments. I later changed RTE_LIBRTE_IP_FRAG_MAX_FRAG to 8 and increased the sending NIC's MTU to 9000, but I still encountered the same issue.
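
To narrow down where the chain breaks, one check I can add is to validate each reassembled mbuf before it crosses the ring (a sketch with illustrative names; rte_mbuf_check() is an experimental API in 20.11, so it needs ALLOW_EXPERIMENTAL_API at build time):

/* Thread 1: sanity-check the reassembled packet 'mo' before enqueueing,
 * to tell whether the chain is already broken before it crosses the ring.
 * rte_mbuf_check() walks the segment chain and verifies nb_segs, among
 * other fields, without panicking. */
const char *reason = NULL;
if (rte_mbuf_check(mo, 1, &reason) != 0) {
        printf("bad mbuf before enqueue: %s\n", reason);
        rte_pktmbuf_free(mo);
} else if (rte_ring_enqueue(test_ring, mo) != 0) {
        rte_pktmbuf_free(mo); /* ring full */
}

If this never fires in Thread 1 but Thread 2 still sees truncated chains, the corruption would have to happen after the enqueue (for example a use-after-free or double free of a segment).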
