[dpdk-dev] questions about new offload ethdev api

Shahaf Shuler shahafs at mellanox.com
Tue Jan 23 15:34:11 CET 2018


Tuesday, January 23, 2018 3:53 PM, Olivier Matz:
> Hi,
> 
> I'm currently porting an application to the new ethdev offload api, and I have
> two questions about it.
> 
> 1/ inconsistent offload capa for PMDs using the old API?
> 
> It is stated in struct rte_eth_txmode:
> 
>    /**
>     * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
>     * Only offloads set on tx_offload_capa field on rte_eth_dev_info
>     * structure are allowed to be set.
>     */
>     uint64_t offloads;
> 
> So, if I want to enable DEV_TX_OFFLOAD_MULTI_SEGS for the whole
> ethdev, I must check that DEV_TX_OFFLOAD_MULTI_SEGS is advertised in
> dev_info->tx_offload_capa.
> 
> In my understanding, many PMDs are still using the old API, and there is a
> conversion layer in ethdev when doing dev_configure(). But I don't see any
> similar mechanism for the dev_info. Therefore,
> DEV_TX_OFFLOAD_MULTI_SEGS is not present in tx_offload_capa, and if I
> follow the API comment, I'm not allowed to use this feature.
> 
> Am I missing something or is it a bug?

Yes, this is something we missed during the review.

DEV_TX_OFFLOAD_MULTI_SEGS is a new capability, added to match the old ETH_TXQ_FLAGS_NOMULTSEGS. I guess we have the same issue with DEV_TX_OFFLOAD_MBUF_FAST_FREE.

I am not sure it can easily be solved with a conversion function in the ethdev layer, since both capabilities are new and ethdev cannot possibly know which PMDs support them.
One option is to set the capability for every PMD, as was implicitly assumed before the new offloads API; I am not sure it is the right way though.
DEV_TX_OFFLOAD_MBUF_FAST_FREE could be handled by converting the default_txconf into capability flags: in case both ETH_TXQ_FLAGS_NOREFCOUNT and ETH_TXQ_FLAGS_NOMULTMEMP are set there, the ethdev layer would set the fast-free flag.
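
Something along these lines (a rough sketch of that conversion, not a tested patch; infer_tx_offload_capa is just an illustrative helper name):

#include <rte_ethdev.h>

/* Sketch only: derive DEV_TX_OFFLOAD_MBUF_FAST_FREE from the PMD's
 * default_txconf. If the default Tx config already requests
 * "no refcount, single mempool", the PMD can also support fast free. */
static uint64_t
infer_tx_offload_capa(const struct rte_eth_dev_info *dev_info)
{
        uint64_t capa = dev_info->tx_offload_capa;
        uint32_t txq_flags = dev_info->default_txconf.txq_flags;

        if ((txq_flags & (ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP)) ==
            (ETH_TXQ_FLAGS_NOREFCOUNT | ETH_TXQ_FLAGS_NOMULTMEMP))
                capa |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;

        return capa;
}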

However, I think the right fix is for the PMDs which indeed support it to provide a patch which sets it in tx_offload_capa, even if they don't want to do the full conversion yet (I think it is very little work), especially considering we expect the majority of the PMDs to move to the new API in 18.05.
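
Such a patch would basically be one line in the PMD's dev_info_get callback; a sketch with a hypothetical "xyz" driver:

#include <rte_ethdev.h>

/* Hypothetical "xyz" PMD: advertise the new Tx capabilities even
 * before doing the full offloads API conversion. */
static void
xyz_dev_info_get(struct rte_eth_dev *dev __rte_unused,
                 struct rte_eth_dev_info *dev_info)
{
        dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS |
                                     DEV_TX_OFFLOAD_MBUF_FAST_FREE;
}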

I tried to make everything work for both old and new PMDs/applications; however, there are still some corner cases.
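
For reference, the application-side check the API comment asks for (and which testpmd skips) is also small; a sketch, error handling omitted:

#include <rte_ethdev.h>

/* Sketch: request DEV_TX_OFFLOAD_MULTI_SEGS only if the PMD
 * advertises it in tx_offload_capa. */
static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = { .txmode = { .offloads = 0 } };

        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS)
                conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}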

> 
> It looks like testpmd does not check the capa before setting an offload
> flag. This could be a workaround in my application.
> 
> 2/ meaning of rxmode.jumbo_frame, rxmode.enable_scatter,
> rxmode.max_rx_pkt_len
> 
> While it's not related to the new API, it is probably a good opportunity to
> clarify the meaning of these flags. I'm not able to find good documentation
> about them.
> 
> Here is my understanding, the configuration only depends on:
> - the maximum rx frame length
> - the amount of data available in a mbuf (minus headroom)
> 
> Flags to set in rxmode (example):
> +---------------+----------------+----------------+-----------------+
> |               |mbuf_data_len=1K|mbuf_data_len=2K|mbuf_data_len=16K|
> +---------------+----------------+----------------+-----------------+
> |max_rx_len=1500|enable_scatter  |                |                 |
> +---------------+----------------+----------------+-----------------+
> |max_rx_len=9000|enable_scatter, |enable_scatter, |jumbo_frame      |
> |               |jumbo_frame     |jumbo_frame     |                 |
> +---------------+----------------+----------------+-----------------+
> 
> If this table is correct, the jumbo_frame flag would be equivalent to checking
> whether max_rx_pkt_len is above a threshold.
> 
> And enable_scatter could be deduced from the mbuf size of the given rxq
> (which is a bit harder but maybe doable).

I'm glad you raised this subject. We had a lot of discussion about it internally at Mellanox.

I fully agree.
All an application needs is to specify the maximum packet size it wants to receive.

I think the lack of documentation is also causing PMDs to use those flags wrongly. For example, some PMDs set the jumbo_frame flag internally without it being set by the application.
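
For example, assuming ETHER_MAX_LEN as the jumbo threshold and the Rx mempool as the second input, the deduction could be as simple as the following (sketch only; set_rx_flags is a made-up helper):

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ether.h>

/* Sketch: derive enable_scatter/jumbo_frame from the max Rx packet
 * length and the Rx mempool, instead of asking the application. */
static void
set_rx_flags(struct rte_eth_conf *conf, struct rte_mempool *rx_mp,
             uint32_t max_rx_pkt_len)
{
        uint16_t buf_size = rte_pktmbuf_data_room_size(rx_mp) -
                            RTE_PKTMBUF_HEADROOM;

        conf->rxmode.max_rx_pkt_len = max_rx_pkt_len;
        conf->rxmode.jumbo_frame = (max_rx_pkt_len > ETHER_MAX_LEN);
        conf->rxmode.enable_scatter = (max_rx_pkt_len > buf_size);
}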

I would like to add one more item: MTU.
What is the relation (if any) between setting the MTU and max_rx_pkt_len?
I know MTU stands for Maximum Transmission Unit; however, at least in Linux, it applies to both transmit and receive.
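
Just to illustrate the redundancy: today an application that wants e.g. a 9000-byte MTU ends up touching both knobs. A sketch, assuming the usual mtu + L2 header + CRC relation (set_port_mtu is a made-up helper):

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Sketch: the two places an application touches today for the same
 * thing: rxmode.max_rx_pkt_len at configure time and the MTU API. */
static int
set_port_mtu(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
             struct rte_eth_conf *conf, uint16_t mtu)
{
        conf->rxmode.jumbo_frame = (mtu > ETHER_MTU);
        conf->rxmode.max_rx_pkt_len = mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;

        if (rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf) < 0)
                return -1;
        return rte_eth_dev_set_mtu(port_id, mtu);
}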

> 
> Thanks,
> Olivier

