[dpdk-users] Mechanism to increase MBUF allocation

Neeraj Tandon (netandon) netandon at cisco.com
Thu May 18 22:21:27 CEST 2017


Hi,

Just for information, and to help anyone who comes across a similar issue.

The root cause was calling mbuf free from a non-EAL thread. The application
requires a delayed buffer free, but doing it in a separate thread launched
via pthread_create() corrupts the mempool. Moving the mbuf free to an EAL
thread solves the problem.
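
For anyone hitting the same thing: this matches DPDK's guidance that a mempool
with a per-lcore cache should not be used from unregistered non-EAL pthreads.
One way to keep the delayed free on an EAL lcore is to hand mbufs over on an
rte_ring: the non-EAL thread only enqueues, and an EAL worker dequeues and
frees. The sketch below is illustrative rather than the original application's
code; the ring name, its size, and the helper names (deferred_free,
free_lcore_main) are made up for the example.

    #include <rte_common.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    /* Hypothetical hand-off ring, created once at init time. */
    static struct rte_ring *free_ring;

    static void
    deferred_free_init(void)
    {
        /* Size and flags are illustrative; one producer (the non-EAL
         * thread) and one consumer (the EAL worker) allow SP/SC. */
        free_ring = rte_ring_create("mbuf_free_ring", 16384, rte_socket_id(),
                                    RING_F_SP_ENQ | RING_F_SC_DEQ);
        if (free_ring == NULL)
            rte_panic("cannot create mbuf free ring\n");
    }

    /* Called from the pthread-created (non-EAL) thread instead of
     * rte_pktmbuf_free(): only park the mbuf on the ring. */
    static void
    deferred_free(struct rte_mbuf *m)
    {
        while (rte_ring_enqueue(free_ring, m) != 0)
            ; /* ring full: busy-retry here; count or drop in real code */
    }

    /* Loop run on an EAL worker lcore: the actual free happens here,
     * where mempool per-lcore caching is valid. */
    static int
    free_lcore_main(__rte_unused void *arg)
    {
        void *obj;

        for (;;) {
            if (rte_ring_dequeue(free_ring, &obj) == 0)
                rte_pktmbuf_free((struct rte_mbuf *)obj);
        }
        return 0;
    }

The worker can be started with rte_eal_remote_launch(free_lcore_main, NULL,
lcore_id), or the drain loop can be folded into an existing EAL lcore's main
loop.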

Thanks,
Neeraj


On 5/16/17, 8:27 PM, "users on behalf of Neeraj Tandon (netandon)"
<users-bounces at dpdk.org on behalf of netandon at cisco.com> wrote:

>Hi,
>
>I was able to increase the number of mbufs and make it work after increasing
>the socket memory. However, I am now facing a segfault in the driver code.
>Intermittently, after receiving a few million packets at 1 Gig line rate,
>the driver segfaults:
>(eth_igb_recv_pkts+0xd3)[0x5057a3]
>
>I am using the net_e1000_igb driver with two 1 Gig ports.
>
>Thanks in advance for any help or pointers on debugging the driver.
>
>EAL: Detected 24 lcore(s)
>EAL: Probing VFIO support...
>EAL: VFIO support initialized
>EAL: PCI device 0000:01:00.0 on NUMA socket 0
>EAL:   probe driver: 8086:1521 net_e1000_igb
>EAL: PCI device 0000:01:00.1 on NUMA socket 0
>EAL:   probe driver: 8086:1521 net_e1000_igb
>
>Regards,
>Neeraj
>
>
>
>
>On 5/15/17, 12:14 AM, "users on behalf of Neeraj Tandon (netandon)"
><users-bounces at dpdk.org on behalf of netandon at cisco.com> wrote:
>
>>Hi,
>>
>>I have recently started using DPDK. I have based my application on the
>>l2fwd sample application. In my application, I hold buffers for a period of
>>time and free the mbufs in another thread. The default number of mbufs is
>>8192. I have two questions regarding this:
>>
>>
>>  1.  How to increase the number of mbufs: increasing NB_MBUF in the
>>rte_pktmbuf_pool_create() call has no effect, i.e. I lose packets when more
>>than 8192 packets are sent in a burst. I see the following used for creating
>>the mbuf pool:
>>
>>/* create the mbuf pool */
>>l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
>>MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>>rte_socket_id());
>>
>>If I want to increase the number of mbufs to, say, 65536, what should I do?
>>
>>  2.  I am receiving packets in an RX thread running on core 2 and freeing
>>them in a thread which I launched using pthread_create() and which runs on
>>core 0. Are there any implications of this kind of mechanism?
>>
>>Thanks for the support and for keeping the forum active.
>>
>>Regards,
>>Neeraj
>>
>
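
A note for the archive on the quoted question about growing the pool to 65536
mbufs: raising the count passed to rte_pktmbuf_pool_create() only helps if the
EAL has enough hugepage/socket memory behind it, which is what the reply at the
top of this thread confirms. A minimal sketch with illustrative values (the
pktmbuf_pool_init wrapper and the 256 cache size are assumptions, not the
original code):

    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Rough sizing: 65536 default-sized mbufs need about
     * 65536 * (RTE_MBUF_DEFAULT_BUF_SIZE + mbuf/mempool overhead),
     * i.e. on the order of 150-200 MB of hugepage memory on this socket,
     * so reserve enough (e.g. via --socket-mem or more hugepages). */
    #define NB_MBUF            65536
    #define MEMPOOL_CACHE_SIZE 256

    static struct rte_mempool *l2fwd_pktmbuf_pool;

    static void
    pktmbuf_pool_init(void)
    {
        l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
                MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                rte_socket_id());
        if (l2fwd_pktmbuf_pool == NULL)
            rte_panic("cannot create mbuf pool - "
                      "not enough hugepage/socket memory?\n");
    }

If creation fails or packets are still dropped, per-socket memory is the first
thing to check, as the follow-up in this thread notes.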


