[dpdk-dev] [PATCH v4 2/2] doc: add guide for debug and troubleshoot
Kovacevic, Marko
marko.kovacevic at intel.com
Fri Jan 18 16:28:44 CET 2019
After checking the patch again, I found a few spelling mistakes, plus a couple of places where a short snippet might help the reader:
> Add user guide on debug and troubleshoot for common issues and
> bottleneck found in sample application model.
>
> Signed-off-by: Vipin Varghese <vipin.varghese at intel.com>
> Acked-by: Marko Kovacevic <marko.kovacevic at intel.com>
> ---
> doc/guides/howto/debug_troubleshoot_guide.rst | 375
> ++++++++++++++++++
> doc/guides/howto/index.rst | 1 +
> 2 files changed, 376 insertions(+)
> create mode 100644 doc/guides/howto/debug_troubleshoot_guide.rst
>
<...>
receieve / receive
> + - If stats for RX and drops updated on same queue? check receieve
> thread
> + - If packet does not reach PMD? check if offload for port and queue
> + matches to traffic pattern send.
> +
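Beyond the typo, the "check receive thread" step might be easier to follow with a concrete example of pulling the per-queue counters. A rough sketch (function name is mine, not from the patch) using rte_eth_stats_get():

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print per-queue RX counters: a queue whose q_ipackets stays flat while
     * imissed/q_errors rise usually points at a stalled receive thread. */
    static void
    dump_rx_queue_stats(uint16_t port_id, uint16_t nb_rx_queues)
    {
        struct rte_eth_stats stats;
        uint16_t q;

        if (rte_eth_stats_get(port_id, &stats) != 0)
            return;

        printf("port %u: imissed=%" PRIu64 " ierrors=%" PRIu64
               " rx_nombuf=%" PRIu64 "\n",
               port_id, stats.imissed, stats.ierrors, stats.rx_nombuf);
        for (q = 0; q < nb_rx_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
            printf("  rxq %u: packets=%" PRIu64 " errors=%" PRIu64 "\n",
                   q, stats.q_ipackets[q], stats.q_errors[q]);
    }

Not asking for this exact code, just something that shows where the numbers come from.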
<...>
Offlaod / offload
> + - Is the packet multi segmented? Check if port and queue offlaod is set.
> +
> +Are there object drops in producer point for ring?
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<...>
sufficent / sufficient
> + - Are drops on specific socket? If yes check if there are sufficent
> + objects by rte_mempool_get_count() or rte_mempool_avail_count()
> + - Is 'rte_mempool_get_count() or rte_mempool_avail_count()' zero?
> + application requires more objects hence reconfigure number of
> + elements in rte_mempool_create().
> + - Is there single RX thread for multiple NIC? try having multiple
> + lcore to read from fixed interface or we might be hitting cache
> + limit, so increase cache_size for pool_create().
> +
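It could also help to show the check this paragraph describes; something along these lines (function name is just mine):

    #include <stdio.h>
    #include <rte_mempool.h>

    /* Quick mempool health check: avail == 0 means producers will start
     * dropping, so either grow the element count given to
     * rte_mempool_create(), increase the per-lcore cache_size, or find
     * where objects leak. */
    static void
    check_pool(const struct rte_mempool *mp)
    {
        unsigned int avail = rte_mempool_avail_count(mp);
        unsigned int used = rte_mempool_in_use_count(mp);

        printf("%s: avail=%u in_use=%u size=%u cache_size=%u\n",
               mp->name, avail, used, mp->size, mp->cache_size);
    }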
Sceanrios / scenarios
> +#. Is performance low for some sceanrios?
> + - Check if sufficient objects in mempool by rte_mempool_avail_count()
> + - Is failure seen in some packets? we might be getting packets with
> + 'size > mbuf data size'.
> + - Is NIC offload or application handling multi segment mbuf? check the
> + special packets are continuous with rte_pktmbuf_is_contiguous().
> + - If there separate user threads used to access mempool objects, use
> + rte_mempool_cache_create() for non DPDK threads.
debuging / debugging
> + - Is the error reproducible with 1GB hugepage? If no, then try debuging
> + the issue with lookup table or objects with rte_mem_lock_page().
> +
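The multi-segment point above might deserve a small sketch of the check as well (again, names are mine):

    #include <stdint.h>
    #include <rte_mbuf.h>

    /* Count chained (multi-segment) mbufs in a received burst. A non-zero
     * count means some packets did not fit the mbuf data room and need
     * multi-segment aware handling or the right port/queue offloads. */
    static uint16_t
    count_multiseg(struct rte_mbuf **pkts, uint16_t nb)
    {
        uint16_t i, chained = 0;

        for (i = 0; i < nb; i++)
            if (!rte_pktmbuf_is_contiguous(pkts[i]))
                chained++;
        return chained;
    }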
> +.. note::
> + Stall in release of MBUF can be because
<...>
softwre / software
> + - If softwre crypto is in use, check if the CRYPTO Library is build with
> + right (SIMD) flags or check if the queue pair using CPU ISA for
> + feature_flags AVX|SSE|NEON using rte_cryptodev_info_get()
Assited / assisted
> + - If its hardware assited crypto showing performance variance? Check if
> + hardware is on same NUMA socket as queue pair and session pool.
> +
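For the crypto part it may be worth showing how to read those flags back; roughly (names are mine):

    #include <stdio.h>
    #include <rte_cryptodev.h>

    /* Report the SIMD/HW capabilities of a crypto device and its NUMA
     * socket, to compare against the lcore that owns the queue pair and
     * the socket of the session and mbuf pools. */
    static void
    dump_crypto_caps(uint8_t dev_id)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);
        printf("cryptodev %u (%s): socket %d SSE:%d AVX:%d AVX2:%d NEON:%d HW:%d\n",
               dev_id, info.driver_name, rte_cryptodev_socket_id(dev_id),
               !!(info.feature_flags & RTE_CRYPTODEV_FF_CPU_SSE),
               !!(info.feature_flags & RTE_CRYPTODEV_FF_CPU_AVX),
               !!(info.feature_flags & RTE_CRYPTODEV_FF_CPU_AVX2),
               !!(info.feature_flags & RTE_CRYPTODEV_FF_CPU_NEON),
               !!(info.feature_flags & RTE_CRYPTODEV_FF_HW_ACCELERATED));
    }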
<...>
exceeeding / exceeding
> + core? registered functions may be exceeeding the desired time slots
> + while running on same service core.
> + - Is function is running on RTE core? check if there are conflicting
> + functions running on same CPU core by rte_thread_get_affinity().
> +
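Maybe also a hint on how to inspect the affinity from a worker; a Linux-only sketch (CPU_ISSET/CPU_SETSIZE are the glibc cpuset macros, function name is mine):

    #include <stdio.h>
    #include <rte_lcore.h>

    /* Print the CPUs the calling thread may run on; useful to spot a
     * service core and an RTE worker pinned to the same physical CPU. */
    static void
    dump_my_affinity(void)
    {
        rte_cpuset_t set;
        unsigned int cpu;

        rte_thread_get_affinity(&set);
        printf("lcore %u can run on:", rte_lcore_id());
        for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
            if (CPU_ISSET(cpu, &set))
                printf(" %u", cpu);
        printf("\n");
    }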
<...>
> +#. Where to capture packets?
> + - Enable pdump in primary to allow secondary to access queue-pair for
> + ports. Thus packets are copied over in RX|TX callback by secondary
> + process using ring buffers.
> + - To capture packet in middle of pipeline stage, user specific hooks
> + or callback are to be used to copy the packets. These packets can
secodnary / secondary
> + be shared to secodnary process via user defined custom rings.
> +
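The custom-ring idea might be clearer with a couple of lines of code. A sketch of such a tap, where tap_ring would be an application ring created in the primary (not something from the patch):

    #include <rte_mbuf.h>
    #include <rte_ring.h>

    /* Mid-pipeline tap: take an extra reference on each mbuf and push the
     * pointers onto a shared ring so a secondary process can inspect them.
     * The mbufs live in shared hugepage memory, so only pointers move. */
    static void
    tap_burst(struct rte_ring *tap_ring, struct rte_mbuf **pkts, uint16_t nb)
    {
        uint16_t i;
        unsigned int sent;

        for (i = 0; i < nb; i++)
            rte_pktmbuf_refcnt_update(pkts[i], 1);
        sent = rte_ring_enqueue_burst(tap_ring, (void **)pkts, nb, NULL);
        /* Give back the extra reference for packets the ring could not take. */
        for (i = sent; i < nb; i++)
            rte_pktmbuf_refcnt_update(pkts[i], -1);
    }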
> +Issue still persists?
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +#. Are there custom or vendor specific offload meta data?
> + - From PMD, then check for META data error and drops.
> + - From application, then check for META data error and drops.
> +#. Is multiprocess is used configuration and data processing?
> + - Check enabling or disabling features from secondary is supported or
> not?
Obejcts / objects
> +#. Is there drops for certain scenario for packets or obejcts?
> + - Check user private data in objects by dumping the details for debug.
> +
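And for the last point, perhaps mention rte_pktmbuf_dump(); e.g. (the private-area layout is of course application defined, and this assumes a DPDK version with rte_mbuf_to_priv(); names are mine):

    #include <stdio.h>
    #include <rte_mbuf.h>

    /* Dump metadata, offload flags and the first bytes of a suspect packet,
     * then locate the application's private area behind the mbuf header. */
    static void
    debug_dump_pkt(struct rte_mbuf *m)
    {
        rte_pktmbuf_dump(stdout, m, 64);
        if (rte_pktmbuf_priv_size(m->pool) > 0) {
            void *priv = rte_mbuf_to_priv(m);

            printf("private area %p (%u bytes)\n",
                   priv, rte_pktmbuf_priv_size(m->pool));
            /* cast 'priv' to the application's own private struct here */
        }
    }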
<...>
Thanks,
Marko K