[dpdk-dev] Packet data out of bounds after rte_eth_rx_burst

Dor Green dorgreen1 at gmail.com
Wed Mar 25 10:32:04 CET 2015


After being able to see the code path used in 1.8, I modified my
free_thresh and other flags so that the Rx Burst Bulk Alloc path would be
used.

This solved the problem (while also increasing performance), though I'm not sure why.
This is good enough for me, but I'm willing to keep investigating if
it's of any interest to you.
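
For reference, this is roughly what the change looks like. A sketch only:
the values follow the preconditions I see in
check_rx_burst_bulk_alloc_preconditions(), which wants rx_free_thresh >=
RTE_PMD_IXGBE_RX_MAX_BURST (32), rx_free_thresh < nb_rx_desc, and
nb_rx_desc evenly divisible by rx_free_thresh:

static struct rte_eth_rxconf const rxconf = {
    .rx_thresh = {
        .pthresh = 8,
        .hthresh = 8,
        .wthresh = 100,
    },
    /* was 0, which failed the preconditions; 32 satisfies them as long
     * as the ring size (hwsize in my queue setup) is a multiple of 32 */
    .rx_free_thresh = 32,
    .rx_drop_en = 0,
};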

On Wed, Mar 25, 2015 at 10:22 AM, Dor Green <dorgreen1 at gmail.com> wrote:
> The printout:
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 11, SFP+: 4
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x154d
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f80c0af0e40
> hw_ring=0x7f811630ce00 dma_addr=0xf1630ce00
> PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc
> Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
> not satisfied, Scattered Rx is requested, or
> RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is not enabled (port=0, queue=0).
> PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc
> Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f80c0af0900
> hw_ring=0x7f811631ce80 dma_addr=0xf1631ce80
> PMD: set_tx_function(): Using full-featured tx code path
> PMD: set_tx_function():  - txq_flags = 0 [IXGBE_SIMPLE_FLAGS=f01]
> PMD: set_tx_function():  - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
>
> I can't seem to get any example app to crash. Is there something I can
> run on one port that will look at the actual data of the packets?
>
> The mempool is (I think) set up normally:
>
> pktmbuf_pool = rte_mempool_create("mbuf_pool", MBUFNB, MBUFSZ, 0,
>                                   sizeof(struct rte_pktmbuf_pool_private),
>                                   rte_pktmbuf_pool_init, NULL,
>                                   rte_pktmbuf_init, NULL, NUMA_SOCKET, 0);
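>
> As a sanity check on the sizing (a sketch; MBUFSZ and MBUFNB are my own
> defines, and the arithmetic assumes the standard pktmbuf layout, where
> the struct rte_mbuf header and RTE_PKTMBUF_HEADROOM come out of each
> element):
>
> #include <rte_mbuf.h>
> #include <rte_debug.h>
>
> #define MAX_FRAME_LEN 1518   /* no jumbo frames in my config */
>
> static void check_mbuf_size(void)
> {
>     /* MBUFSZ is the total element size given to rte_mempool_create, so
>      * the usable data room is what remains after the mbuf header and
>      * the reserved headroom. */
>     size_t data_room = MBUFSZ - sizeof(struct rte_mbuf) - RTE_PKTMBUF_HEADROOM;
>
>     if (data_room < MAX_FRAME_LEN)
>         rte_panic("mbuf data room %zu < max frame %d\n",
>                   data_room, MAX_FRAME_LEN);
> }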
>
>
> For good measure, here's the rest of the port setup (shortened, in
> addition to what I showed below):
>
> static struct rte_eth_rxconf const rxconf = {
>     .rx_thresh = {
>         .pthresh = 8,
>         .hthresh = 8,
>         .wthresh = 100,
>     },
>     .rx_free_thresh = 0,
>     .rx_drop_en = 0,
> };
>
> rte_eth_dev_configure(port, 1, 1, &ethconf);
> rte_eth_rx_queue_setup(port, 0, hwsize, NUMA_SOCKET, &rxconf, pktmbuf_pool);
> rte_eth_dev_start(port);
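>
> The receive side then looks something like this (a simplified sketch, not
> my real processing -- process_packet() is a stand-in for my actual
> handler; the bounds check is just there to catch the bad mbuf before the
> fault):
>
> #include <stdio.h>
> #include <rte_ethdev.h>
> #include <rte_mbuf.h>
>
> void process_packet(char *data, uint16_t len);  /* stand-in for my handler */
>
> static void rx_loop(uint8_t port)
> {
>     struct rte_mbuf *bufs[32];
>     uint16_t i, nb;
>
>     nb = rte_eth_rx_burst(port, 0, bufs, 32);
>
>     for (i = 0; i < nb; i++) {
>         struct rte_mbuf *m = bufs[i];
>         char *data = rte_pktmbuf_mtod(m, char *);
>         char *buf  = (char *)m->buf_addr;
>
>         /* The data pointer should always lie inside the mbuf's own
>          * buffer; the crash implies that for the packet following the
>          * "packet of death" it does not. */
>         if (data < buf || data + rte_pktmbuf_data_len(m) > buf + m->buf_len)
>             printf("bad mbuf %p: data %p outside buffer\n",
>                    (void *)m, (void *)data);
>         else
>             process_packet(data, rte_pktmbuf_data_len(m));
>
>         rte_pktmbuf_free(m);
>     }
> }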
>
>
> On Tue, Mar 24, 2015 at 6:21 PM, Bruce Richardson
> <bruce.richardson at intel.com> wrote:
>> On Tue, Mar 24, 2015 at 04:10:18PM +0200, Dor Green wrote:
>>> 1 . The eth_conf is:
>>>
>>> static struct rte_eth_conf const ethconf = {
>>>     .link_speed = 0,
>>>     .link_duplex = 0,
>>>
>>>     .rxmode = {
>>>         .mq_mode = ETH_MQ_RX_RSS,
>>>         .max_rx_pkt_len = ETHER_MAX_LEN,
>>>         .split_hdr_size = 0,
>>>         .header_split = 0,
>>>         .hw_ip_checksum = 0,
>>>         .hw_vlan_filter = 0,
>>>         .jumbo_frame = 0,
>>>         .hw_strip_crc = 0,   /**< CRC stripped by hardware */
>>>     },
>>>
>>>     .txmode = {
>>>     },
>>>
>>>     .rx_adv_conf = {
>>>         .rss_conf = {
>>>             .rss_key = NULL,
>>>             .rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6,
>>>         }
>>>     },
>>>
>>>     .fdir_conf = {
>>>         .mode = RTE_FDIR_MODE_SIGNATURE,
>>>
>>>     },
>>>
>>>     .intr_conf = {
>>>         .lsc = 0,
>>>     },
>>> };
>>>
>>> I've tried setting jumbo frames on with a larger packet length and
>>> even turning off RSS/FDIR. No luck.
>>>
>>> I don't see anything relating to the port in the initial prints; what
>>> are you looking for?
>>
>> I'm looking for the PMD initialization text, like that shown below (from testpmd):
>> Configuring Port 0 (socket 0)
>> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f9ba08cd700 hw_ring=0x7f9ba0b00080 dma_addr=0x36d00080
>> PMD: ixgbe_set_tx_function(): Using simple tx code path
>> PMD: ixgbe_set_tx_function(): Vector tx enabled.
>> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f9ba08cce80 hw_ring=0x7f9ba0b10080 dma_addr=0x36d10080
>> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
>> Port 0: 68:05:CA:04:51:3A
>> Configuring Port 1 (socket 0)
>> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f9ba08cab40 hw_ring=0x7f9ba0b20100 dma_addr=0x36d20100
>> PMD: ixgbe_set_tx_function(): Using simple tx code path
>> PMD: ixgbe_set_tx_function(): Vector tx enabled.
>> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f9ba08ca2c0 hw_ring=0x7f9ba0b30100 dma_addr=0x36d30100
>> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
>> Port 1: 68:05:CA:04:51:38
>>
>> This tells us what RX and TX functions are going to be used for each port.
>>
>>>
>>> 2. The packet is a normal, albeit somewhat large (1239 bytes), TCP data
>>> packet (SSL certificate data, specifically).
>>> One important thing of note that I've just realised: it's not this
>>> "packet of death" itself that causes the segmentation fault (i.e. that
>>> has the out-of-bounds address for its data), but the packet that comes
>>> after it -- no matter what packet that is.
>>>
>> Can this problem be reproduced using testpmd or any of the standard dpdk
>> example apps, by sending in the same packet sequence?
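>>
>> For instance, testpmd in rxonly mode with verbose output will print
>> per-packet metadata as traffic arrives (exact EAL flags depend on your
>> setup; this is just an illustration):
>>
>>   ./testpmd -c 0x3 -n 4 -- -i
>>   testpmd> set fwd rxonly
>>   testpmd> set verbose 1
>>   testpmd> start
>>
>> If the same packet sequence corrupts an mbuf there too, that points at
>> the driver rather than your application.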
>>
>> Is there anything unusual being done in the setup of the mempool used for the
>> packet buffers?
>>
>> /Bruce
>>
>>>
>>> On Tue, Mar 24, 2015 at 3:17 PM, Bruce Richardson
>>> <bruce.richardson at intel.com> wrote:
>>> > On Tue, Mar 24, 2015 at 12:54:14PM +0200, Dor Green wrote:
>>> >> I've managed to fix it so that 1.8 works, and the segmentation fault still occurs there.
>>> >>
>>> >> On Tue, Mar 24, 2015 at 11:55 AM, Dor Green <dorgreen1 at gmail.com> wrote:
>>> >> > I tried 1.8, but that fails to initialize my device and fails at the pci probe:
>>> >> >     "Cause: Requested device 0000:04:00.1 cannot be used"
>>> >> > Can't even compile 2.0rc2 atm, getting:
>>> >> > "/usr/lib/gcc/x86_64-linux-gnu/4.6/include/emmintrin.h:701:1: note:
>>> >> > expected '__m128i' but argument is of type 'int'"
>>> >> > For reasons I don't understand.
>>> >> >
>>> >> > As for the example apps (in 1.7), I can run them properly but I don't
>>> >> > think any of them do the same processing as I do. Note that mine does
>>> >> > work with most packets.
>>> >> >
>>> >> >
>>> >
>>> > Couple of further questions:
>>> > 1. What config options are being used to configure the port and what is the
>>> > output printed at port initialization time? This is needed to let us track down
>>> > what specific RX path is being used inside the ixgbe driver.
>>> > 2. What type of packets specifically cause problems? Is it reproducible with
>>> > one particular packet, or packet type? Are you sending in jumbo-frames?
>>> >
>>> > Regards,
>>> > /Bruce
>>> >
>>> >> > On Mon, Mar 23, 2015 at 11:24 PM, Matthew Hall <mhall at mhcomputing.net> wrote:
>>> >> >> On Mon, Mar 23, 2015 at 05:19:00PM +0200, Dor Green wrote:
>>> >> >>> I changed it to free and it still happens. Note that the segmentation fault
>>> >> >>> happens before that anyway.
>>> >> >>>
>>> >> >>> I am using 1.7.1 at the moment. I can try using a newer version.
>>> >> >>
>>> >> >> I'm using 1.7.X in my open-source DPDK-based app and it works, but I have
>>> >> >> an IGB 1-gigabit NIC, and how RX / TX work is quite driver-specific, of
>>> >> >> course.
>>> >> >>
>>> >> >> I suspect there's some issue with how things are working in your IXGBE NIC
>>> >> >> driver / setup. Do the same failures occur in the DPDK's own sample
>>> >> >> apps?
>>> >> >>
>>> >> >> Matthew.

