[dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type

Zhao1, Wei wei.zhao1 at intel.com
Tue Jun 23 03:12:21 CEST 2020



> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang at intel.com>
> Sent: Monday, June 22, 2020 11:36 PM
> To: Zhao1, Wei <wei.zhao1 at intel.com>; dev at dpdk.org
> Cc: stable at dpdk.org
> Subject: RE: [PATCH v2 4/4] net/ice: support switch flow for specific L4 type
> 
> 
> 
> > -----Original Message-----
> > From: Zhao1, Wei <wei.zhao1 at intel.com>
> > Sent: Wednesday, June 17, 2020 2:14 PM
> > To: dev at dpdk.org
> > Cc: stable at dpdk.org; Zhang, Qi Z <qi.z.zhang at intel.com>; Zhao1, Wei
> > <wei.zhao1 at intel.com>
> > Subject: [PATCH v2 4/4] net/ice: support switch flow for specific L4
> > type
> >
> > This patch adds more specific tunnel types for ipv4/ipv6 packets. It
> > enables the tcp/udp layer of ipv4/ipv6 to be matched as L4 payload,
> > but without the L4 dst/src port numbers as input set, for the switch
> > filter rule.
> >
> > Fixes: 47d460d63233 ("net/ice: rework switch filter")
> > Cc: stable at dpdk.org
> >
> > Signed-off-by: Wei Zhao <wei.zhao1 at intel.com>
> > ---
> >  drivers/net/ice/ice_switch_filter.c | 23 +++++++++++++++++------
> >  1 file changed, 17 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_switch_filter.c
> > b/drivers/net/ice/ice_switch_filter.c
> > index 3b38195d6..f4fd8ff33 100644
> > --- a/drivers/net/ice/ice_switch_filter.c
> > +++ b/drivers/net/ice/ice_switch_filter.c
> > @@ -471,11 +471,11 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
> >  	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
> >  	uint64_t input_set = ICE_INSET_NONE;
> > +	uint16_t tunnel_valid = 0;
> 
> Why not vxlan_valid and nvgre_valid, to keep the naming consistent with
> the other variables?
> Or can we use a bitmap?

OK, will update in v3.
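
For the record, a rough sketch of what the v3 change could look like with the
two separate flags suggested above (vxlan_valid/nvgre_valid are the names from
the review comment, not the final v3 code):

	/* Sketch only: separate flags instead of the 1/2-valued counter. */
	bool vxlan_valid = 0;	/* set in the RTE_FLOW_ITEM_TYPE_VXLAN branch */
	bool nvgre_valid = 0;	/* set in the RTE_FLOW_ITEM_TYPE_NVGRE branch */
	bool tunnel_valid = 0;	/* kept as a plain "any tunnel seen" flag */

	...

	if (!pppoe_patt_valid) {
		if (vxlan_valid)
			*tun_type = ICE_SW_TUN_VXLAN;
		else if (nvgre_valid)
			*tun_type = ICE_SW_TUN_NVGRE;
		else if (ipv4_valiad && tcp_valiad)
			*tun_type = ICE_SW_IPV4_TCP;
		else if (ipv4_valiad && udp_valiad)
			*tun_type = ICE_SW_IPV4_UDP;
		else if (ipv6_valiad && tcp_valiad)
			*tun_type = ICE_SW_IPV6_TCP;
		else if (ipv6_valiad && udp_valiad)
			*tun_type = ICE_SW_IPV6_UDP;
	}

Keeping two booleans (or two bits of a bitmap) avoids overloading one variable
with an encoded 1/2 meaning, which is what the comment above asks for.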

> 
> >  	bool pppoe_elem_valid = 0;
> >  	bool pppoe_patt_valid = 0;
> >  	bool pppoe_prot_valid = 0;
> >  	bool profile_rule = 0;
> > -	bool tunnel_valid = 0;
> >  	bool ipv6_valiad = 0;
> >  	bool ipv4_valiad = 0;
> >  	bool udp_valiad = 0;
> > @@ -960,7 +960,7 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  					   "Invalid NVGRE item");
> >  				return 0;
> >  			}
> > -			tunnel_valid = 1;
> > +			tunnel_valid = 2;
> >  			if (nvgre_spec && nvgre_mask) {
> >  				list[t].type = ICE_NVGRE;
> >  				if (nvgre_mask->tni[0] ||
> > @@ -1325,6 +1325,21 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  			*tun_type = ICE_SW_TUN_PPPOE;
> >  	}
> >
> > +	if (!pppoe_patt_valid) {
> > +		if (tunnel_valid == 1)
> > +			*tun_type = ICE_SW_TUN_VXLAN;
> > +		else if (tunnel_valid == 2)
> > +			*tun_type = ICE_SW_TUN_NVGRE;
> > +		else if (ipv4_valiad && tcp_valiad)
> > +			*tun_type = ICE_SW_IPV4_TCP;
> > +		else if (ipv4_valiad && udp_valiad)
> > +			*tun_type = ICE_SW_IPV4_UDP;
> > +		else if (ipv6_valiad && tcp_valiad)
> > +			*tun_type = ICE_SW_IPV6_TCP;
> > +		else if (ipv6_valiad && udp_valiad)
> > +			*tun_type = ICE_SW_IPV6_UDP;
> > +	}
> > +
> >  	*lkups_num = t;
> >
> >  	return input_set;
> > @@ -1536,10 +1551,6 @@ ice_switch_parse_pattern_action(struct
> > ice_adapter *ad,
> >
> >  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> >  		item_num++;
> > -		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
> > -			tun_type = ICE_SW_TUN_VXLAN;
> > -		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> > -			tun_type = ICE_SW_TUN_NVGRE;
> >  		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> >  			const struct rte_flow_item_eth *eth_mask;
> >  			if (item->mask)
> > --
> > 2.19.1
> 
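
Not from the original thread: to illustrate the kind of rule this patch is
meant to accept, below is a hedged rte_flow sketch of an IPv4/TCP pattern with
no L4 port spec, which the new selection logic would classify as
ICE_SW_IPV4_TCP. The function name, port id and the drop action are
placeholders; whether the switch filter finally takes the rule still depends
on the rest of the parser.

	#include <stdint.h>
	#include <rte_flow.h>

	/* Illustrative only: match eth / ipv4 / tcp with no dst/src port in
	 * the input set, i.e. TCP is matched purely as the L4 payload type. */
	static int
	create_ipv4_tcp_rule(uint16_t port_id)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
			{ .type = RTE_FLOW_ITEM_TYPE_TCP },	/* no spec/mask: no port numbers */
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_DROP },	/* placeholder action */
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		struct rte_flow_error error;

		return rte_flow_create(port_id, &attr, pattern, actions, &error) ? 0 : -1;
	}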


