[PATCH v7 18/21] net/cpfl: add HW statistics
Ferruh Yigit
ferruh.yigit at amd.com
Tue Feb 28 13:04:53 CET 2023
On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
Comment moved down, please don't top post, it makes it very hard to follow
the discussion.
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit at amd.com>
>> Sent: Tuesday, February 28, 2023 6:02 PM
>> To: Liu, Mingxia <mingxia.liu at intel.com>; dev at dpdk.org; Xing, Beilei
>> <beilei.xing at intel.com>; Zhang, Yuying <yuying.zhang at intel.com>
>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>
>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit at amd.com>
>>>> Sent: Tuesday, February 28, 2023 5:52 AM
>>>> To: Liu, Mingxia <mingxia.liu at intel.com>; dev at dpdk.org; Xing, Beilei
>>>> <beilei.xing at intel.com>; Zhang, Yuying <yuying.zhang at intel.com>
>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>
>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>>>> This patch adds hardware packet/byte statistics.
>>>>>
>>>>> Signed-off-by: Mingxia Liu <mingxia.liu at intel.com>
>>>>
>>>> <...>
>>>>
>>>>> +static int
>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>>>>> +{
>>>>> +	struct idpf_vport *vport =
>>>>> +		(struct idpf_vport *)dev->data->dev_private;
>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
>>>>> +	int ret;
>>>>> +
>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
>>>>> +	if (ret == 0) {
>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
>>>>> +					 0 : RTE_ETHER_CRC_LEN;
>>>>> +
>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
>>>>> +				  pstats->rx_broadcast - pstats->rx_discards;
>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>>>>> +				  pstats->tx_unicast;
>>>>> +		stats->imissed = pstats->rx_discards;
>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>>>>> +		stats->ibytes = pstats->rx_bytes;
>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
>>>>> +		stats->obytes = pstats->tx_bytes;
>>>>> +
>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
>>>>
>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry; updating
>>>> it only in stats_get() will make the telemetry value wrong.
>>>>
>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever an
>>>> allocation fails (alongside 'rxq->rx_stats.mbuf_alloc_failed')?
>>> [Liu, Mingxia] As far as I know, rte_eth_dev_data is not a public
>>> structure provided to users; they need to access it through the
>>> rte_ethdev APIs.
>>> Because we already put the Rx and Tx burst functions into common/idpf,
>>> which has no dependency on the ethdev lib, updating
>>> "dev->data->rx_mbuf_alloc_failed" when mbuf allocation fails would
>>> break the design of the common/idpf interface to net/cpfl and net/idpf.
>>>
>>> And I didn't find any reference to 'dev->data->rx_mbuf_alloc_failed' in
>>> the lib code.
>>>
>>
>> Please check the 'eth_dev_handle_port_info()' function.
>> As I said, this is used by telemetry; it is not directly exposed to the
>> user.
>>
>> I got the design concern; perhaps you can put a brief limitation note in
>> the driver documentation.
>>
> OK, got it.
>
> Our previous design did have this flaw, but if we don't want to affect
> the correctness of telemetry, we have to redesign the idpf common module
> code, which means a lot of work. So can we lower the priority of this
> issue?
>
I don't believe this is urgent, so can you put a one-line limitation note
in the documentation for now, and fix it later?
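For example, something along these lines in the driver guide (exact file
and wording are only a suggestion, not agreed text):

    Limitations
    -----------

    ``rx_mbuf_alloc_failed`` in device data is only refreshed by
    ``rte_eth_stats_get()``, so the value reported through telemetry may
    be stale until the next stats query.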
And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' wherever
'rxq->rx_stats.mbuf_alloc_failed' is updated is easy, although you may
need to store a pointer into 'dev->data' in the rxq struct for this.
A rough sketch of that idea follows.
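Something like this (all names below are made up for illustration;
storing only a plain counter pointer keeps common/idpf free of an ethdev
dependency):

    /* common/idpf Rx queue struct: optional back-pointer that the
     * ethdev-aware driver (net/cpfl or net/idpf) fills in at queue setup.
     */
    struct idpf_rx_queue {
            /* ... existing fields ... */
            uint64_t *dev_alloc_failed; /* &dev->data->rx_mbuf_alloc_failed */
    };

    /* net/cpfl, at Rx queue setup time: */
    rxq->dev_alloc_failed = &dev->data->rx_mbuf_alloc_failed;

    /* common/idpf burst path, on mbuf allocation failure: */
    nmb = rte_mbuf_raw_alloc(rxq->mp);
    if (unlikely(nmb == NULL)) {
            rxq->rx_stats.mbuf_alloc_failed++;
            if (rxq->dev_alloc_failed != NULL)
                    (*rxq->dev_alloc_failed)++;
            break;
    }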
But I think it is also fair to question the assumption telemetry makes
that 'rx_mbuf_alloc_fail' is always available data, and to consider
moving it to the 'eth_dev_handle_port_stats()' handler.
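For reference, the info handler reports the field straight out of device
data, roughly like this (paraphrased, not the literal lib/ethdev code):

    /* inside eth_dev_handle_port_info(), simplified */
    rte_tel_data_add_dict_u64(d, "rx_mbuf_alloc_fail",
                              eth_dev->data->rx_mbuf_alloc_failed);

so a driver that only refreshes the field in its stats_get() callback
exposes a stale value through this endpoint.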
+Bruce for comment.