[dpdk-dev] [PATCH v3 00/34] A new net PMD - ice

Varghese, Vipin vipin.varghese at intel.com
Thu Dec 13 14:09:35 CET 2018


Hi Wenzhuo,

Thanks for the update; a couple of suggestions from my side are shared below.

Snipped
> > [PATCH v2 02/20] net/ice: support device initialization
> > > +# Compile burst-oriented ICE PMD driver
> > > +#
> > > +CONFIG_RTE_LIBRTE_ICE_PMD=y
> >
> > Based on 'https://patches.dpdk.org/patch/48488/' the user is expected to
> > set this option, but here it is already set to 'y' by default. Is this
> > correct? If yes, can you update 'https://patches.dpdk.org/patch/48488/'?
> We've discussed the setting. I don't know of anything left. If there is,
> please let me know.
I think I communicated poorly, so let me try again. The documentation tells the user to set the option to 'y', but your default config already has it set to 'y'. So either make the default 'n', so the user actually has something to configure, or reword the documentation as 'Ensure the config option is set to y before building'. A sketch of the first alternative is below.
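For illustration only (a sketch, assuming the option stays in config/common_base like the other PMD options), the first alternative would look like:

    #
    # Compile burst-oriented ICE PMD driver
    #
    CONFIG_RTE_LIBRTE_ICE_PMD=n

with the documentation then telling the user to change it to 'y' before building.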

> 
> >
> >  [PATCH v2 20/20] net/ice: support meson build
> > > > > > Should not the meson build option be added at the start, i.e. in
> > > > > > patch 1/20, so that the compile options do not fail?
> > > > > It will not fail. Enabling the compile earlier only means the code
> > > > > can be compiled.
> > > > > But, to use this device we do need the whole patch set. From this
> > > > > point of view, compiling it at the end may be better.
> > > > Thanks for the update; so will the meson build succeed if only 3 of
> > > > the patches are applied?
> > > Sure, the meson build will not be broken by any one of these patches.
> > > Only as of this patch does what meson builds support ice.
> > Thanks for confirming that you have tried './devtools/test-meson-builds.sh'
> > and that the intermediate build for the ICE_DSI PMD does not fail.
> I said the meson build is working, but I don't know what the ICE_DSI PMD is.

Once again, apologies if I communicated poorly; I meant ICE_PMD. My question is: 'if I apply only patch 1/20 of v2, can I build librte_pmd_ice?' Are we not expecting each additional layer of functionality, such as MTU set or switch set, to compile as it is added (i.e. './devtools/test-meson-builds.sh' should pass after every patch in the series), rather than only after the final patch?

> 
> >
> >  [PATCH v2 16/20] net/ice: support basic RX/TX
> >
> >  [PATCH v2 14/20] net/ice: support statistics
> > > > > +	ns->eth.tx_bytes -= (ns->eth.tx_unicast + ns->eth.tx_multicast +
> > > > > +			     ns->eth.tx_broadcast) * ETHER_CRC_LEN;
> > > >
> > > > In an earlier patch, for the 'mtu set check', we added VSI SWITCH VLAN.
> > > > Should we add VSI VLAN here?
> > > No need; they're different functions. We account for the CRC length here
> > > because the HW counts the packet length before the CRC is added.
> > So you are not fetching stats from the switch HW registers, is this
> > correct? How will you get stats for what was actually transmitted in
> > xstats? As I understand it, xstats are the switch HW stats, right?
> No. The stats are read from the HW. We just correct them, as the HW doesn't
> have the chance to add the CRC length. But I don't understand why the switch
> is mentioned here.
Will the ice PMD for CVL expose all the switch stats as 'xstats'? If yes, can you share in the patch comments which stats are switch-level? For reference, my reading of the CRC correction above is sketched below.
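A minimal, self-contained sketch of that correction as I read it (the struct layout is hypothetical, and it assumes the HW byte counter includes the 4-byte Ethernet CRC while DPDK stats report bytes without it, which is what the subtraction implies):

    #include <stdint.h>

    #define ETHER_CRC_LEN 4 /* as in rte_ether.h */

    /* Hypothetical stand-in for the HW port stats the patch updates. */
    struct eth_stats {
        uint64_t tx_bytes;
        uint64_t tx_unicast;
        uint64_t tx_multicast;
        uint64_t tx_broadcast;
    };

    /* The HW byte counter includes the CRC appended by the MAC; DPDK
     * stats exclude the CRC, so subtract one CRC length per packet. */
    static void
    exclude_tx_crc(struct eth_stats *s)
    {
        uint64_t pkts = s->tx_unicast + s->tx_multicast + s->tx_broadcast;

        s->tx_bytes -= pkts * ETHER_CRC_LEN;
    }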

> 
> >
> >  [PATCH v2 04/20] net/ice: support getting device information
> > > > Does this mean per-queue offload capability is not supported? If yes,
> > > > can you mention this in the release notes under 'support or limitations'?
> > > No, it's not supported. We have a document, ice.ini, that lists all the
> > > supported features. All the others are not supported.
> > > BTW, I don't think anything not supported is a limitation.
> > If I understand correctly, ICE_DSI_PMD is advertising that it has no
> > offloads for RX and TX. But you are stating that ice.ini lists the
> > supported offloads. So let me rephrase the question: 'if you support a
> > port-level offload capability, it is reflected for all RX and TX queues.
> > But if you report the queue-level offload capability as 0 for RX and TX,
> > then rte_eth_rx_queue_setup and rte_eth_tx_queue_setup should fail when a
> > queue offload is requested. Is this a correct understanding?'
> Sorry, I don't know what ICE_DSI_PMD is.
ICE_PMD. A sketch of the behaviour I am asking about is below.
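To make the question concrete, here is a minimal sketch against the ethdev API of this period (the VLAN-strip flag is just an example offload, and the helper name is mine):

    #include <string.h>
    #include <rte_ethdev.h>

    /* Request a per-queue RX offload only if the PMD advertises a
     * queue-level capability; with rx_queue_offload_capa == 0, a
     * non-zero rxconf.offloads should make queue setup fail. */
    static int
    setup_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
              struct rte_mempool *mp)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;

        rte_eth_dev_info_get(port_id, &dev_info);

        memset(&rxconf, 0, sizeof(rxconf));
        if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
            rxconf.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;

        return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, mp);
    }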


> 
> >
> > > > If the device switch is not configured (default value from NVM), should
> > > > we highlight that the switch can support speeds 10, 100, 1000, 10000
> > > > and so on?
> > > No, this is the capability read from the HW.
> > If the HW supports or is configured for 10, 100, or 25G, then those speeds
> > should be returned correctly; this I agree with. But when the device is
> > queried for its capability, it should report all the supported speeds of
> > the switch. Am I right?
> No. This shows the result, not all the supported speeds, e.g. the speed
> after auto-negotiation.
As per your current statement, 'if the user calls rte_eth_dev_info_get to read speed_capa, the current auto-negotiated speed will be returned rather than ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G'. I will leave this for others to comment on, since in my humble opinion this is not the expected behaviour; a sketch of the distinction I expect is below.
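My understanding of the ethdev contract, sketched (generic code, not ice-specific; error handling omitted):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* speed_capa should be a bitmask of every speed the port can
     * support; the negotiated speed comes from rte_eth_link_get(). */
    static void
    show_speeds(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_link link;

        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.speed_capa & ETH_LINK_SPEED_25G)
            printf("port %u is 25G-capable\n", port_id);

        rte_eth_link_get(port_id, &link);
        printf("negotiated speed: %u Mbps\n", link.link_speed);
    }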

> 
> >
> > > > If the speeds are not reported as described above, can you please add
> > > > this to the release notes and documentation?
> > > Here we listed all the cases we can get from the HW.
> > Please add this to the ice_dsi documentation also.
> Sorry, no idea about ice_dsi.
My mistake; ICE_PMD is what I am referring to.

> 
> >
> > snipped


