[dpdk-dev] Kernel deadlock due to rte_kni

Neil Horman nhorman at tuxdriver.com
Thu Mar 26 11:46:24 CET 2015


On Wed, Mar 25, 2015 at 07:39:49PM +0000, Dey, Souvik wrote:
> Hi All,
>                 There appears to be an issue with rte_kni.ko that drives the kernel into a deadlock. We are running rte_kni.ko with multiple thread support, with the threads pinned to different non-isolated cores. When we test with TCP/TLS, the kernel hangs on a race condition. Below is the kernel stack trace.
> 
> PID: 19942  TASK: ffff880227a71950  CPU: 3   COMMAND: "CE_2N_Comp_SamP"
> #0 [ffff88043fd87ec0] crash_nmi_callback at ffffffff8101d4a8
> #1 [ffff88043fd87ed0] notifier_call_chain at ffffffff81055b68
> #2 [ffff88043fd87f00] notify_die at ffffffff81055be0
> #3 [ffff88043fd87f30] do_nmi at ffffffff81009ddd
> #4 [ffff88043fd87f50] nmi at ffffffff812ea9d0
>     [exception RIP: _raw_spin_lock_bh+25]
>     RIP: ffffffff812ea2a4  RSP: ffff880189439c88  RFLAGS: 00000293
>     RAX: 0000000000005b59  RBX: ffff880291708ec8  RCX: 000000000000045a
>     RDX: ffff880189439d90  RSI: 0000000000000000  RDI: ffff880291708ec8
>     RBP: ffff880291708e80   R8: 00000000047fef78   R9: 0000000000000001
>     R10: 0000000000000009  R11: ffffffff8126c658  R12: ffff880423799a40
>     R13: ffff880189439e08  R14: 000000000000045a  R15: 0000000000000017
>     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
> --- <NMI exception stack> ---
> #5 [ffff880189439c88] _raw_spin_lock_bh at ffffffff812ea2a4
> #6 [ffff880189439c90] lock_sock_nested at ffffffff8122e948
> #7 [ffff880189439ca0] tcp_sendmsg at ffffffff8126c676
> #8 [ffff880189439d50] sock_aio_write at ffffffff8122bb12
> #9 [ffff880189439e00] do_sync_write at ffffffff810c61c6
> #10 [ffff880189439f10] vfs_write at ffffffff810c68a9
> #11 [ffff880189439f40] sys_write at ffffffff810c6dfe
> #12 [ffff880189439f80] system_call_fastpath at ffffffff812eab92
>     RIP: 00007fc7909bc0ed  RSP: 00007fc787ffe108  RFLAGS: 00000202
>     RAX: 0000000000000001  RBX: ffffffff812eab92  RCX: 00007fc7880aa170
>     RDX: 000000000000045a  RSI: 0000000004d56546  RDI: 000000000000002b
>     RBP: 0000000004d56546   R8: 00000000047fef78   R9: 0000000000000001
>     R10: 0000000000000009  R11: 0000000000000293  R12: 0000000004d56546
>     R13: 000000000483de10  R14: 000000000000045a  R15: 00000001880008b0
>     ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
> 
> PID: 3598   TASK: ffff88043db21310  CPU: 1   COMMAND: "kni_pkt0"
> #0 [ffff88043fc87ec0] crash_nmi_callback at ffffffff8101d4a8
> #1 [ffff88043fc87ed0] notifier_call_chain at ffffffff81055b68
> #2 [ffff88043fc87f00] notify_die at ffffffff81055be0
> #3 [ffff88043fc87f30] do_nmi at ffffffff81009ddd
> #4 [ffff88043fc87f50] nmi at ffffffff812ea9d0
>     [exception RIP: _raw_spin_lock+16]
>     RIP: ffffffff812ea0b1  RSP: ffff88043fc83e78  RFLAGS: 00000297
>     RAX: 0000000000005a59  RBX: ffff880291708e80  RCX: 0000000000000001
>     RDX: ffff88043fc83ec0  RSI: 0000000000002f82  RDI: ffff880291708ec8
>     RBP: ffff88043d8f4000   R8: ffffffff813a8d20   R9: 0000000000000001
>     R10: ffff88043d9d8098  R11: ffffffff8101e62a  R12: ffffffff81279a3c
>     R13: ffff88042d9b3fd8  R14: ffff880291708e80  R15: ffff88043fc83ec0
>     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
> --- <NMI exception stack> ---
> #5 [ffff88043fc83e78] _raw_spin_lock at ffffffff812ea0b1
> #6 [ffff88043fc83e78] tcp_delack_timer at ffffffff81279a4e
> #7 [ffff88043fc83e98] run_timer_softirq at ffffffff8104642d
> #8 [ffff88043fc83f08] __do_softirq at ffffffff81041539
> #9 [ffff88043fc83f48] call_softirq at ffffffff812ebd9c
> #10 [ffff88043fc83f60] do_softirq at ffffffff8100b037
> #11 [ffff88043fc83f80] irq_exit at ffffffff8104185a
> #12 [ffff88043fc83f90] smp_apic_timer_interrupt at ffffffff8101eaef
> #13 [ffff88043fc83fb0] apic_timer_interrupt at ffffffff812eb553
> --- <IRQ stack> ---
> #14 [ffff88042d9b3ae8] apic_timer_interrupt at ffffffff812eb553
>     [exception RIP: tcp_rcv_established+1732]
>     RIP: ffffffff812756ab  RSP: ffff88042d9b3b90  RFLAGS: 00000202
>     RAX: 0000000000000020  RBX: ffff88042d86f470  RCX: 000000000000020a
>     RDX: ffff8801fc163864  RSI: ffff88032f8b2380  RDI: ffff880291708e80
>     RBP: ffff88032f8b2380   R8: ffff88032f8b2380   R9: ffffffff81327a60
>     R10: 000000000000000e  R11: ffffffff8112ae8f  R12: ffffffff812eb54e
>     R13: ffffffff8122ea88  R14: 000000000000010c  R15: 00000000000005a8
>     ORIG_RAX: ffffffffffffff10  CS: 0010  SS: 0018
> #15 [ffff88042d9b3bd8] tcp_v4_do_rcv at ffffffff8127b483
> #16 [ffff88042d9b3c48] tcp_v4_rcv at ffffffff8127d89f
> #17 [ffff88042d9b3cb8] ip_local_deliver_finish at ffffffff81260cc2
> #18 [ffff88042d9b3cd8] __netif_receive_skb at ffffffff81239de6
> #19 [ffff88042d9b3d28] netif_receive_skb at ffffffff81239e7f
> #20 [ffff88042d9b3d58] kni_net_rx_normal at ffffffffa022b06f [rte_kni]
> #21 [ffff88042d9b3ec8] kni_thread_multiple at ffffffffa022a2df [rte_kni]
> #22 [ffff88042d9b3ee8] kthread at ffffffff81051b27
> #23 [ffff88042d9b3f48] kernel_thread_helper at ffffffff812ebca4
> 
> 
> On further investigation, I found that in the file kni_net.c, in the function kni_net_rx_normal(), we are calling netif_receive_skb(), which is normally called from softirq context in the kernel, but here it is being called from normal process (kthread) context without softirqs disabled. If a softirq for the same TCP socket then fires on that core while the stack is holding the socket spinlock, we end up in a deadlock, because rte_kni never disabled softirqs on the core before calling into the kernel stack. I have 2 questions regarding this:
> 
> 1.       Why are we using netif_receive_skb(), is there any particular reason for this?
> 
Because it's a bug.
> 2.       Normally all driver code will call netif_rx() to post packets on the backlog and raise a softirq to do the further processing up the stack. Why is kni not following the same approach?
> 
Again, it looks like a bug. I think what you want there is netif_rx_ni() instead of
netif_receive_skb().
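
Something along these lines should do it (an untested sketch against
kni_net_rx_normal() in kni_net.c; the path and surrounding lines are
paraphrased from memory, not copied from the tree). netif_rx_ni() queues
the skb on the per-CPU backlog and kicks the NET_RX softirq itself, so the
TCP receive path never runs with softirqs enabled on that core:

    --- a/lib/librte_eal/linuxapp/kni/kni_net.c
    +++ b/lib/librte_eal/linuxapp/kni/kni_net.c
    @@ kni_net_rx_normal()
    -		/* Call netif interface */
    -		netif_receive_skb(skb);
    +		/*
    +		 * Hand the skb to the stack via the backlog instead of
    +		 * processing it inline.  netif_rx_ni() is the process-context
    +		 * variant: it disables preemption, queues the skb, and runs
    +		 * any pending softirqs before returning, so the softirq/timer
    +		 * path can't spin on a lock this kthread already holds.
    +		 */
    +		netif_rx_ni(skb);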

> Can someone confirm what should be done in this regard? Awaiting a reply asap as we are blocked on this.
> 
> --
> Regards,
> Souvik
> 

