[PATCH v2 1/2] net/ice: fix Tx offload path choice

Zhang, Qi Z qi.z.zhang at intel.com
Fri Mar 4 04:31:56 CET 2022



> -----Original Message-----
> From: Xu, Ting <ting.xu at intel.com>
> Sent: Friday, March 4, 2022 11:19 AM
> To: Liu, KevinX <kevinx.liu at intel.com>; dev at dpdk.org
> Cc: Yang, Qiming <qiming.yang at intel.com>; Zhang, Qi Z
> <qi.z.zhang at intel.com>; Yang, SteveX <stevex.yang at intel.com>; Yigit, Ferruh
> <ferruh.yigit at intel.com>; Liu, KevinX <kevinx.liu at intel.com>; stable at dpdk.org
> Subject: RE: [PATCH v2 1/2] net/ice: fix Tx offload path choice
> 
> > -----Original Message-----
> > From: Kevin Liu <kevinx.liu at intel.com>
> > Sent: Wednesday, December 29, 2021 5:37 PM
> > To: dev at dpdk.org
> > Cc: Yang, Qiming <qiming.yang at intel.com>; Zhang, Qi Z
> > <qi.z.zhang at intel.com>; Yang, SteveX <stevex.yang at intel.com>; Yigit,
> > Ferruh <ferruh.yigit at intel.com>; Liu, KevinX <kevinx.liu at intel.com>;
> > stable at dpdk.org
> > Subject: [PATCH v2 1/2] net/ice: fix Tx offload path choice
> >
> > In checksum forwarding mode, testpmd needs to calculate the
> > checksum of each layer's protocol.
> >
> > When the outer UDP checksum is configured for hardware calculation
> > and the outer IP checksum for software calculation, dev->tx_pkt_burst
> > in ice_set_tx_function() is set to ice_xmit_pkts_vec_avx2, and the
> > inner and outer UDP checksums of forwarded tunnel packets are wrong.
> > dev->tx_pkt_burst should instead be set to ice_xmit_pkts.
> >
> > This patch adds RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM to
> > ICE_TX_NO_VECTOR_FLAGS so that dev->tx_pkt_burst is set to
> > ice_xmit_pkts. After tunnel packets are forwarded, the inner and
> > outer UDP checksums are correct.
> >
> > In addition, the earlier patch "net/ice: fix Tx Checksum offload"
> > causes interrupt errors in a special case where only the inner IP and
> > inner UDP checksums are set for hardware calculation. Since updating
> > ICE_TX_NO_VECTOR_FLAGS solves that problem as well, this patch
> > restores the code modified by that earlier patch.
> >
> > Fixes: 28f9002ab67f ("net/ice: add Tx AVX512 offload path")
> > Fixes: 295968d17407 ("ethdev: add namespace")
> > Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
> > Cc: stable at dpdk.org
> >
> > Signed-off-by: Kevin Liu <kevinx.liu at intel.com>
> > ---
> >  drivers/net/ice/ice_rxtx.c            | 41 ++++++-------------
> >  drivers/net/ice/ice_rxtx_vec_common.h | 59 +++++++++------------------
> >  2 files changed, 31 insertions(+), 69 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
> > index 4f218bcd0d..041f4bc91f 100644
> > --- a/drivers/net/ice/ice_rxtx.c
> > +++ b/drivers/net/ice/ice_rxtx.c
> > @@ -2501,35 +2501,18 @@ ice_txd_enable_checksum(uint64_t ol_flags,
> >  			<< ICE_TX_DESC_LEN_MACLEN_S;
> >
> >  	/* Enable L3 checksum offloads */
> > -	/*Tunnel package usage outer len enable L3 checksum offload*/
> > -	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
> > -		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
> > -			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
> > -			*td_offset |= (tx_offload.outer_l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
> > -			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
> > -			*td_offset |= (tx_offload.outer_l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
> > -			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
> > -			*td_offset |= (tx_offload.outer_l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		}
> > -	} else {
> > -		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
> > -			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
> > -			*td_offset |= (tx_offload.l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
> > -			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
> > -			*td_offset |= (tx_offload.l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
> > -			*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
> > -			*td_offset |= (tx_offload.l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		}
> > +	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
> > +		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
> > +		*td_offset |= (tx_offload.l3_len >> 2) <<
> > +			ICE_TX_DESC_LEN_IPLEN_S;
> > +	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
> > +		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
> > +		*td_offset |= (tx_offload.l3_len >> 2) <<
> > +			ICE_TX_DESC_LEN_IPLEN_S;
> > +	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
> > +		*td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
> > +		*td_offset |= (tx_offload.l3_len >> 2) <<
> > +			ICE_TX_DESC_LEN_IPLEN_S;
> >  	}
> >
> >  	if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
> >
> > diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
> > index 8ff01046e1..2dd2d83650 100644
> > --- a/drivers/net/ice/ice_rxtx_vec_common.h
> > +++ b/drivers/net/ice/ice_rxtx_vec_common.h
> > @@ -250,7 +250,8 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
> >  #define ICE_TX_NO_VECTOR_FLAGS (			\
> >  		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
> >  		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
> > -		RTE_ETH_TX_OFFLOAD_TCP_TSO)
> > +		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
> > +		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
> >
> >  #define ICE_TX_VECTOR_OFFLOAD (				\
> >  		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
> > @@ -364,45 +365,23 @@ ice_txd_enable_offload(struct rte_mbuf *tx_pkt,
> >  	uint32_t td_offset = 0;
> >
> >  	/* Tx Checksum Offload */
> > -	/*Tunnel package usage outer len enable L2/L3 checksum offload*/
> > -	if (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
> > -		/* SET MACLEN */
> > -		td_offset |= (tx_pkt->outer_l2_len >> 1) <<
> > -			ICE_TX_DESC_LEN_MACLEN_S;
> > -
> > -		/* Enable L3 checksum offload */
> > -		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
> > -			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
> > -			td_offset |= (tx_pkt->outer_l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
> > -			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
> > -			td_offset |= (tx_pkt->outer_l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
> > -			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
> > -			td_offset |= (tx_pkt->outer_l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		}
> > -	} else {
> > -		/* SET MACLEN */
> > -		td_offset |= (tx_pkt->l2_len >> 1) <<
> > -			ICE_TX_DESC_LEN_MACLEN_S;
> > -
> > -		/* Enable L3 checksum offload */
> > -		if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
> > -			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
> > -			td_offset |= (tx_pkt->l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
> > -			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
> > -			td_offset |= (tx_pkt->l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
> > -			td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
> > -			td_offset |= (tx_pkt->l3_len >> 2) <<
> > -				ICE_TX_DESC_LEN_IPLEN_S;
> > -		}
> > +	/* SET MACLEN */
> > +	td_offset |= (tx_pkt->l2_len >> 1) <<
> > +		ICE_TX_DESC_LEN_MACLEN_S;
> > +
> > +	/* Enable L3 checksum offload */
> > +	if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM) {
> > +		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4_CSUM;
> > +		td_offset |= (tx_pkt->l3_len >> 2) <<
> > +			ICE_TX_DESC_LEN_IPLEN_S;
> > +	} else if (ol_flags & RTE_MBUF_F_TX_IPV4) {
> > +		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV4;
> > +		td_offset |= (tx_pkt->l3_len >> 2) <<
> > +			ICE_TX_DESC_LEN_IPLEN_S;
> > +	} else if (ol_flags & RTE_MBUF_F_TX_IPV6) {
> > +		td_cmd |= ICE_TX_DESC_CMD_IIPT_IPV6;
> > +		td_offset |= (tx_pkt->l3_len >> 2) <<
> > +			ICE_TX_DESC_LEN_IPLEN_S;
> >  	}
> >
> >  	/* Enable L4 checksum offloads */
> > --
> > 2.33.1
> 
> Acked-by: Ting Xu <ting.xu at intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi


