[dpdk-dev] Mellanox ConnectX-5 crashes and mbuf leak

Yongseok Koh yskoh at mellanox.com
Sat Oct 7 00:31:00 CEST 2017


Hi, Martin

Even though I had run quite thorough tests before sending out the patch, I found
that a deadlock can happen if the Rx queue size is smaller. It is 128 by default
in testpmd, while I usually use 256.
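
(For reference, testpmd's Rx ring size can be raised on its command line; the
--rxd option should do it, e.g. something like:

    testpmd -c 0xfe -- -i --rxd=256

to run with 256 Rx descriptors per queue.)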

I've fixed the bug and submitted a new patch [1], which actually reverts the
previous patch. So you can apply the attached patch and disregard the old one.

I have also done extensive tests with this new patch, but please let me know
your test results.

[1]
"net/mlx5: fix deadlock due to buffered slots in Rx SW ring"
at http://dpdk.org/dev/patchwork/patch/29847


Thanks,
Yongseok


diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
index aff3359..9d37954 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.c
@@ -549,7 +549,7 @@ rxq_replenish_bulk_mbuf(struct rxq *rxq, uint16_t n)
 {
        const uint16_t q_n = 1 << rxq->elts_n;
        const uint16_t q_mask = q_n - 1;
-       const uint16_t elts_idx = rxq->rq_ci & q_mask;
+       uint16_t elts_idx = rxq->rq_ci & q_mask;
        struct rte_mbuf **elts = &(*rxq->elts)[elts_idx];
        volatile struct mlx5_wqe_data_seg *wq = &(*rxq->wqes)[elts_idx];
        unsigned int i;
@@ -567,6 +567,11 @@ rxq_replenish_bulk_mbuf(struct rxq *rxq, uint16_t n)
                wq[i].addr = rte_cpu_to_be_64((uintptr_t)elts[i]->buf_addr +
                                              RTE_PKTMBUF_HEADROOM);
        rxq->rq_ci += n;
+       /* Prevent overflowing into consumed mbufs. */
+       elts_idx = rxq->rq_ci & q_mask;
+       for (i = 0; i < MLX5_VPMD_DESCS_PER_LOOP; i += 2)
+               _mm_storeu_si128((__m128i *)&(*rxq->elts)[elts_idx + i],
+                                _mm_set1_epi64x((uintptr_t)&rxq->fake_mbuf));
        rte_wmb();
        *rxq->rq_db = rte_cpu_to_be_32(rxq->rq_ci);
 }
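
Below is a minimal, standalone sketch of the padding idea the hunk above applies
(not the mlx5 code itself): after replenishing the SW ring up to the producer
index, the next vector-loop's worth of slots is pointed at a dummy placeholder,
so a vectorized loop that touches a few entries past the last valid slot never
hits mbufs still owned by the application. Names such as ring, fake and
DESCS_PER_LOOP are illustrative, and 64-bit pointers are assumed.

    #include <emmintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    #define Q_N            16                 /* ring size, power of two */
    #define Q_MASK         (Q_N - 1)
    #define DESCS_PER_LOOP 4                  /* slots touched per vector iteration */

    struct buf { char data[64]; };

    static struct buf fake;                   /* placeholder, like rxq->fake_mbuf */
    static struct buf *ring[Q_N + DESCS_PER_LOOP]; /* SW ring plus pad area */

    /* Caller guarantees elts_idx + n does not wrap within one call. */
    static void replenish(uint16_t *rq_ci, struct buf **bufs, uint16_t n)
    {
            uint16_t elts_idx = *rq_ci & Q_MASK;
            unsigned int i;

            for (i = 0; i < n; ++i)
                    ring[elts_idx + i] = bufs[i];
            *rq_ci += n;
            /* Pad the slots right after the new producer index with the
             * placeholder, two 64-bit pointers per store, mirroring the
             * _mm_storeu_si128() in the hunk above. */
            elts_idx = *rq_ci & Q_MASK;
            for (i = 0; i < DESCS_PER_LOOP; i += 2)
                    _mm_storeu_si128((__m128i *)&ring[elts_idx + i],
                                     _mm_set1_epi64x((uintptr_t)&fake));
    }

    int main(void)
    {
            static struct buf pool[Q_N];
            struct buf *batch[8];
            uint16_t rq_ci = 0;
            int i;

            for (i = 0; i < 8; ++i)
                    batch[i] = &pool[i];
            replenish(&rq_ci, batch, 8);

            /* Slots 8..11 now point at the placeholder, not at live buffers. */
            for (i = 8; i < 8 + DESCS_PER_LOOP; ++i)
                    printf("slot %d -> %s\n", i,
                           ring[i] == &fake ? "fake" : "real");
            return 0;
    }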



> On Oct 6, 2017, at 7:10 AM, Martin Weiser <martin.weiser at allegro-packets.com> wrote:
> 
> Hi Yongseok,
> 
> unfortunately, in a quick test using testpmd and ~20Gb/s of traffic with
> your patch, traffic forwarding always stops completely after a few seconds.
> 
> I wanted to test this with the current master of dpdk-next-net, but after
> "net/mlx5: support upstream rdma-core" it will not compile against
> MLNX_OFED_LINUX-4.1-1.0.2.0.
> So I used the last commit before that (v17.08-306-gf214841) and applied
> your patch, leading to the result described above.
> Apart from your patch no other modifications were made, and without the
> patch testpmd forwards the traffic without a problem (in this
> configuration mbufs should never run out, so this test was never affected
> by the original issue).
> 
> For this test I simply used testpmd with the following command line:
> "testpmd -c 0xfe -- -i" and issued the "start" command. As the traffic
> generator I used t-rex with the sfr traffic profile.
> 
> Best regards,
> Martin
> 
> 
> 
> On 05.10.17 23:46, Yongseok Koh wrote:
>> Hi, Martin
>> 
>> Thanks for your thorough and valuable report. We were able to reproduce it. I found
>> a bug and fixed it. Please refer to the patch [1] I sent to the mailing list.
>> It might not apply cleanly to v17.08 as I rebased it on top of
>> Nelio's flow cleanup patch, but as it is a simple patch you can easily apply
>> it manually.
>> 
>> Thanks,
>> Yongseok
>> 
>> [1] http://dpdk.org/dev/patchwork/patch/29781
>> 
>>> On Sep 26, 2017, at 2:23 AM, Martin Weiser <martin.weiser at allegro-packets.com> wrote:
>>> 
>>> Hi,
>>> 
>>> we are currently testing the Mellanox ConnectX-5 100G NIC with DPDK
>>> 17.08 as well as dpdk-next-net and are
>>> experiencing mbuf leaks as well as crashes (and in some instances even
>>> kernel panics in an mlx5 module) under
>>> certain load conditions.
>>> 
>>> We initially saw these issues only in our own DPDK-based application, and
>>> it took some effort to reproduce this
>>> in one of the DPDK example applications. However, with the attached patch
>>> to the load-balancer example we can
>>> reproduce the issues reliably.
>>> 
>>> The patch may look weird at first but I will explain why I made these
>>> changes:
>>> 
>>> * the sleep introduced in the worker threads simulates heavy processing,
>>>   which causes the software Rx rings to fill up under load. If the rings
>>>   are large enough (I increased the ring size with the load-balancer
>>>   command line option, as you can see in the example call further down),
>>>   the mbuf pool may run empty, and I believe this leads to a malfunction
>>>   in the mlx5 driver. As soon as this happens the NIC will stop forwarding
>>>   traffic, probably because the driver cannot allocate mbufs for the
>>>   packets received by the NIC. Unfortunately, when this happens most of
>>>   the mbufs never return to the mbuf pool, so even when the traffic stops
>>>   the pool remains almost empty and the application will not forward
>>>   traffic even at a very low rate.
>>> 
>>> * the use of the reference count in the mbuf, in addition to the situation
>>>   described above, is what makes the mlx5 DPDK driver crash almost
>>>   immediately under load. In our application we rely on this feature to be
>>>   able to forward the packet quickly and still send the packet to a worker
>>>   thread for analysis, and finally free the packet when the analysis is
>>>   done. Here I simulated this by increasing the mbuf reference count
>>>   immediately after receiving the mbuf from the driver and then calling
>>>   rte_pktmbuf_free in the worker thread, which should only decrement the
>>>   reference count again and not actually free the mbuf (a short sketch of
>>>   this pattern follows below).
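>>> 
>>> A minimal sketch of that Rx/worker hand-off pattern, using the standard
>>> DPDK mbuf APIs (rte_pktmbuf_refcnt_update and rte_pktmbuf_free); the
>>> enqueue_to_worker() helper is only illustrative, not part of the example
>>> application:
>>> 
>>>     #include <rte_ethdev.h>
>>>     #include <rte_mbuf.h>
>>> 
>>>     #define BURST 32
>>> 
>>>     /* Hypothetical hand-off to a worker core, e.g. an rte_ring enqueue. */
>>>     void enqueue_to_worker(struct rte_mbuf *m);
>>> 
>>>     /* I/O core: receive, take an extra reference, hand off for analysis. */
>>>     void io_rx(uint16_t port, uint16_t queue)
>>>     {
>>>         struct rte_mbuf *pkts[BURST];
>>>         uint16_t n = rte_eth_rx_burst(port, queue, pkts, BURST);
>>>         uint16_t i;
>>> 
>>>         for (i = 0; i < n; i++) {
>>>             /* refcnt 1 -> 2: the worker and the forwarding path
>>>              * each hold one reference. */
>>>             rte_pktmbuf_refcnt_update(pkts[i], 1);
>>>             enqueue_to_worker(pkts[i]);
>>>         }
>>>         /* ... forwarding/tx of pkts[] continues elsewhere ... */
>>>     }
>>> 
>>>     /* Worker core: analysis done, drop our reference; the mbuf only
>>>      * returns to the pool once the forwarding path has freed it too. */
>>>     void worker_done(struct rte_mbuf *m)
>>>     {
>>>         rte_pktmbuf_free(m);   /* decrements refcnt, frees when it hits 0 */
>>>     }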
>>> 
>>> We executed the patched load-balancer application with the following
>>> command line:
>>> 
>>>    ./build/load_balancer -l 3-7 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx
>>> "(0,3),(1,3)" --w "4" --lpm "16.0.0.0/8=>0; 48.0.0.0/8=>1;" --pos-lb 29
>>> --rsz "1024, 32768, 1024, 1024"
>>> 
>>> Then we generated traffic using the t-rex traffic generator and the sfr
>>> test case. On our machine the issues start
>>> to happen when the traffic exceeds ~6 Gbps, but this may vary depending
>>> on how powerful the test machine is (by the
>>> way, we were able to reproduce this on different types of hardware).
>>> 
>>> A typical stacktrace looks like this:
>>> 
>>>    Thread 1 "load_balancer" received signal SIGSEGV, Segmentation fault.
>>>    0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>) at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>>>    716      __builtin_ia32_storedqu ((char *)__P, (__v16qi)__B);
>>>    (gdb) bt
>>>    #0  0x0000000000614475 in _mm_storeu_si128 (__B=..., __P=<optimized out>) at /usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h:716
>>>    #1  rxq_cq_decompress_v (elts=0x7fff3732bef0, cq=0x7ffff7f99380, rxq=0x7fff3732a980) at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:679
>>>    #2  rxq_burst_v (pkts_n=<optimized out>, pkts=0xa7c7b0 <app+432944>, rxq=0x7fff3732a980) at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1242
>>>    #3  mlx5_rx_burst_vec (dpdk_rxq=0x7fff3732a980, pkts=<optimized out>, pkts_n=<optimized out>) at /root/dpdk-next-net/drivers/net/mlx5/mlx5_rxtx_vec_sse.c:1277
>>>    #4  0x000000000043c11d in rte_eth_rx_burst (nb_pkts=3599, rx_pkts=0xa7c7b0 <app+432944>, queue_id=0, port_id=0 '\000') at /root/dpdk-next-net//x86_64-native-linuxapp-gcc/include/rte_ethdev.h:2781
>>>    #5  app_lcore_io_rx (lp=lp@entry=0xa7c700 <app+432768>, n_workers=n_workers@entry=1, bsz_rd=bsz_rd@entry=144, bsz_wr=bsz_wr@entry=144, pos_lb=pos_lb@entry=29 '\035') at /root/dpdk-next-net/examples/load_balancer/runtime.c:198
>>>    #6  0x0000000000447dc0 in app_lcore_main_loop_io () at /root/dpdk-next-net/examples/load_balancer/runtime.c:485
>>>    #7  app_lcore_main_loop (arg=<optimized out>) at /root/dpdk-next-net/examples/load_balancer/runtime.c:669
>>>    #8  0x0000000000495e8b in rte_eal_mp_remote_launch ()
>>>    #9  0x0000000000441e0d in main (argc=<optimized out>, argv=<optimized out>) at /root/dpdk-next-net/examples/load_balancer/main.c:99
>>> 
>>> The crash does not always happen at the exact same spot, but in our tests
>>> it always happens in the same function.
>>> In a few instances, instead of an application crash, the system froze
>>> completely with what appeared to be a kernel
>>> panic. The last output looked like a crash in the interrupt handler of an
>>> mlx5 module, but unfortunately I cannot
>>> provide the exact output right now.
>>> 
>>> All tests were performed under Ubuntu 16.04 server running a
>>> 4.4.0-96-generic kernel, and the latest Mellanox OFED,
>>> MLNX_OFED_LINUX-4.1-1.0.2.0-ubuntu16.04-x86_64, was used.
>>> 
>>> Any help with this issue is greatly appreciated.
>>> 
>>> Best regards,
>>> Martin
>>> 
>>> <test.patch>
> 


