[dpdk-stable] [DPDK] net/virtio: fix check scatter on all Rx queues

Peng, ZhihongX zhihongx.peng at intel.com
Wed Sep 22 10:13:17 CEST 2021



> -----Original Message-----
> From: Kevin Traynor <ktraynor at redhat.com>
> Sent: Wednesday, September 22, 2021 1:45 AM
> To: David Marchand <david.marchand at redhat.com>; Peng, ZhihongX
> <zhihongx.peng at intel.com>
> Cc: Xia, Chenbo <chenbo.xia at intel.com>; Maxime Coquelin
> <maxime.coquelin at redhat.com>; dev <dev at dpdk.org>; Ivan Ilchenko
> <ivan.ilchenko at oktetlabs.ru>; dpdk stable <stable at dpdk.org>; Christian
> Ehrhardt <christian.ehrhardt at canonical.com>; Xueming(Steven) Li
> <xuemingl at nvidia.com>; Luca Boccassi <bluca at debian.org>
> Subject: Re: [dpdk-stable] [DPDK] net/virtio: fix check scatter on all Rx
> queues
> 
> On 15/09/2021 19:37, David Marchand wrote:
> > On Wed, Aug 4, 2021 at 10:36 AM <zhihongx.peng at intel.com> wrote:
> >>
> >> From: Zhihong Peng <zhihongx.peng at intel.com>
> >>
> >> This patch fixes the incorrect iteration over the virtqueues:
> >> the end of the vqs array cannot be detected by checking whether
> >> an entry is NULL.
> >
> > Indeed, good catch.
> >
> > I can reproduce a crash with v20.11.3 which has backport of 4e8169eb0d2d.
> > I can not see it with main: maybe due to a lucky allocation or size
> > requested to rte_zmalloc... ?
> >

This problem was discovered through Google ASan; we have submitted a DPDK ASan patch:
http://patchwork.dpdk.org/project/dpdk/patch/20210918074155.872358-1-zhihongx.peng@intel.com/


> > The usecase is simple, I am surprised no validation caught it.
> >
> > # gdb ./build/app/dpdk-testpmd -ex 'run --vdev
> > net_virtio_user0,path=/dev/vhost-net,iface=titi,queues=3 -a 0:0:0.0 --
> > -i'
> >
> > ...
> >
> > Thread 1 "dpdk-testpmd" received signal SIGSEGV, Segmentation fault.
> > virtio_rx_mem_pool_buf_size (mp=0x110429983) at
> > ../drivers/net/virtio/virtio_ethdev.c:873
> > 873        return rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
> > Missing separate debuginfos, use: yum debuginfo-install
> > elfutils-libelf-0.182-3.el8.x86_64 libbpf-0.2.0-1.el8.x86_64
> > (gdb) bt
> > #0  virtio_rx_mem_pool_buf_size (mp=0x110429983) at
> > ../drivers/net/virtio/virtio_ethdev.c:873
> > #1  0x0000000000e370d1 in virtio_check_scatter_on_all_rx_queues
> > (frame_size=1530, dev=0x1799a40 <rte_eth_devices>) at
> > ../drivers/net/virtio/virtio_ethdev.c:907
> > #2  virtio_mtu_set (dev=0x1799a40 <rte_eth_devices>, mtu=<optimized
> > out>) at ../drivers/net/virtio/virtio_ethdev.c:938
> > #3  0x00000000008c30e5 in rte_eth_dev_set_mtu
> > (port_id=port_id at entry=0, mtu=<optimized out>) at
> > ../lib/librte_ethdev/rte_ethdev.c:3484
> > #4  0x00000000006a61d8 in update_jumbo_frame_offload
> > (portid=portid at entry=0) at ../app/test-pmd/testpmd.c:3371
> > #5  0x00000000006a62bc in init_config_port_offloads (pid=0,
> > socket_id=0) at ../app/test-pmd/testpmd.c:1416
> > #6  0x000000000061770c in init_config () at
> > ../app/test-pmd/testpmd.c:1505
> > #7  main (argc=<optimized out>, argv=<optimized out>) at
> > ../app/test-pmd/testpmd.c:3800
> > (gdb) f 1
> > #1  0x0000000000e370d1 in virtio_check_scatter_on_all_rx_queues
> > (frame_size=1530, dev=0x1799a40 <rte_eth_devices>) at
> > ../drivers/net/virtio/virtio_ethdev.c:907
> > 907            buf_size = virtio_rx_mem_pool_buf_size(rxvq->mpool);
> > (gdb) p hw->max_queue_pairs
> > $1 = 3
> > (gdb) p qidx
> > $2 = 5
> > (gdb) p hw->vqs[0]
> > $3 = (struct virtqueue *) 0x17ffb03c0
> > (gdb) p hw->vqs[2]
> > $4 = (struct virtqueue *) 0x17ff9dcc0
> > (gdb) p hw->vqs[4]
> > $5 = (struct virtqueue *) 0x17ff8acc0
> > (gdb) p hw->vqs[6]
> > $6 = (struct virtqueue *) 0x17ff77cc0
> > (gdb) p hw->vqs[7]
> > $7 = (struct virtqueue *) 0x0
> > (gdb) p hw->vqs[8]
> > $8 = (struct virtqueue *) 0x100004ac0
> > (gdb) p hw->vqs[9]
> > $9 = (struct virtqueue *) 0x17ffb1600
> > (gdb) p hw->vqs[10]
> > $10 = (struct virtqueue *) 0x17ffb18c0
> >
> >
> >
> 
> For reference, also observed when 20.11.3 is paired with OVS
> 
> https://mail.openvswitch.org/pipermail/ovs-dev/2021-September/387940.html
