[PATCH v4] gro: bug fix in identifying fragmented packets

Hu, Jiayu jiayu.hu at intel.com
Sun Jun 12 07:20:29 CEST 2022


Hi Kumara,

> -----Original Message-----
> From: Kumara Parameshwaran <kumaraparamesh92 at gmail.com>
> Sent: Wednesday, June 8, 2022 5:57 PM
> To: Hu, Jiayu <jiayu.hu at intel.com>
> Cc: dev at dpdk.org; Kumara Parameshwaran
> <kumaraparamesh92 at gmail.com>; stable at dpdk.org
> Subject: [PATCH v4] gro: bug fix in identifying fragmented packets
> 
> From: Kumara Parameshwaran <kumaraparamesh92 at gmail.com>
> 
> A packet with RTE_PTYPE_L4_FRAG (0x300) has both the RTE_PTYPE_L4_TCP
> (0x100) and RTE_PTYPE_L4_UDP (0x200) bits set. A fragmented packet, as
> defined in rte_mbuf_ptype.h, cannot be recognized as any other L4 type,
> so the GRO layer should not use IS_IPV4_TCP_PKT or IS_IPV4_UDP_PKT for
> RTE_PTYPE_L4_FRAG. Instead, when the packet type is RTE_PTYPE_L4_FRAG,
> the IP header should be parsed to recognize the actual L4 protocol and
> invoke the respective GRO handler.

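To make the overlap concrete: with the ptype values from rte_mbuf_ptype.h, an
IP fragment passes both the plain TCP and the plain UDP bit test. A quick
standalone check (illustration only, not part of the patch):

#include <stdio.h>
#include <stdint.h>

/* L4 packet type values as defined in rte_mbuf_ptype.h */
#define RTE_PTYPE_L4_TCP  0x00000100
#define RTE_PTYPE_L4_UDP  0x00000200
#define RTE_PTYPE_L4_FRAG 0x00000300

int main(void)
{
	uint32_t ptype = RTE_PTYPE_L4_FRAG;

	/* Both print 1 for a fragment, since 0x300 contains the TCP bit
	 * (0x100) as well as the UDP bit (0x200).
	 */
	printf("TCP test: %d\n", (ptype & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP);
	printf("UDP test: %d\n", (ptype & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP);
	return 0;
}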
A simpler way is to add "((ptype & RTE_PTYPE_L4_FRAG) != RTE_PTYPE_L4_FRAG)"
to IS_IPV4_VXLAN_TCP4_PKT and IS_IPV4_TCP_PKT, so that IP fragments are not
processed by the TCP-based GRO functions. For example:

#define IS_IPV4_TCP_PKT(ptype) (RTE_ETH_IS_IPV4_HDR(ptype) && \
		((ptype & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP) && \
		((ptype & RTE_PTYPE_L4_FRAG) != RTE_PTYPE_L4_FRAG) && \
		(RTE_ETH_IS_TUNNEL_PKT(ptype) == 0))
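As a quick standalone check of just the L4 part of that condition (again an
illustration with the rte_mbuf_ptype.h values, not the real macro), the extra
test keeps plain TCP packets and rejects fragments:

#include <stdio.h>

/* L4 packet type values as defined in rte_mbuf_ptype.h */
#define RTE_PTYPE_L4_TCP  0x00000100
#define RTE_PTYPE_L4_FRAG 0x00000300

/* L4 portion of the proposed IS_IPV4_TCP_PKT() condition */
#define L4_IS_TCP_NOT_FRAG(ptype) \
	((((ptype) & RTE_PTYPE_L4_TCP) == RTE_PTYPE_L4_TCP) && \
	 (((ptype) & RTE_PTYPE_L4_FRAG) != RTE_PTYPE_L4_FRAG))

int main(void)
{
	printf("plain TCP: %d\n", L4_IS_TCP_NOT_FRAG(RTE_PTYPE_L4_TCP));  /* 1 */
	printf("fragment : %d\n", L4_IS_TCP_NOT_FRAG(RTE_PTYPE_L4_FRAG)); /* 0 */
	return 0;
}

The UDP checks would presumably stay as they are, since the UDP/IPv4 GRO path
is the one expected to merge IP fragments.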

Thanks,
Jiayu

> 
> Fixes: 1ca5e6740852 ("gro: support UDP/IPv4")
> Cc: stable at dpdk.org
> 
> Signed-off-by: Kumara Parameshwaran <kumaraparamesh92 at gmail.com>
> ---
> v1:
> * Introduce the IS_IPV4_FRAGMENT macro to check whether a packet is
>   fragmented and, if so, extract the IP header to identify the protocol
>   type and invoke the appropriate GRO handler. This is done for both
>   the rte_gro_reassemble and rte_gro_reassemble_burst APIs.
> v2,v3,v4:
> * Fix extra whitespace and column limit warnings
> 
>  lib/gro/rte_gro.c | 43 +++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 41 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/gro/rte_gro.c b/lib/gro/rte_gro.c
> index 6f7dd4d709..83d6e21dbb 100644
> --- a/lib/gro/rte_gro.c
> +++ b/lib/gro/rte_gro.c
> @@ -38,6 +38,9 @@ static gro_tbl_pkt_count_fn tbl_pkt_count_fn[RTE_GRO_TYPE_MAX_NUM] = {
>  		((ptype & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) && \
>  		(RTE_ETH_IS_TUNNEL_PKT(ptype) == 0))
> 
> +#define IS_IPV4_FRAGMENT(ptype) (RTE_ETH_IS_IPV4_HDR(ptype) && \
> +		((ptype & RTE_PTYPE_L4_FRAG) == RTE_PTYPE_L4_FRAG))
> +
>  #define IS_IPV4_VXLAN_TCP4_PKT(ptype) (RTE_ETH_IS_IPV4_HDR(ptype) && \
>  		((ptype & RTE_PTYPE_L4_UDP) == RTE_PTYPE_L4_UDP) && \
>  		((ptype & RTE_PTYPE_TUNNEL_VXLAN) == \
> @@ -240,7 +243,28 @@ rte_gro_reassemble_burst(struct rte_mbuf **pkts,
>  		 * The timestamp is ignored, since all packets
>  		 * will be flushed from the tables.
>  		 */
> -		if (IS_IPV4_VXLAN_TCP4_PKT(pkts[i]->packet_type) &&
> +		if (IS_IPV4_FRAGMENT(pkts[i]->packet_type)) {
> +			struct rte_ipv4_hdr ip4h_copy;
> +			const struct rte_ipv4_hdr *ip4h = rte_pktmbuf_read(pkts[i], pkts[i]->l2_len,
> +					sizeof(*ip4h), &ip4h_copy);
> +			if (ip4h->next_proto_id == IPPROTO_UDP && do_udp4_gro) {
> +				ret = gro_udp4_reassemble(pkts[i],
> +							&udp_tbl, 0);
> +				if (ret > 0)
> +					nb_after_gro--;
> +				else if (ret < 0)
> +					unprocess_pkts[unprocess_num++] = pkts[i];
> +			} else if (ip4h->next_proto_id == IPPROTO_TCP && do_tcp4_gro) {
> +				ret = gro_tcp4_reassemble(pkts[i],
> +						&tcp_tbl, 0);
> +				if (ret > 0)
> +					nb_after_gro--;
> +				else if (ret < 0)
> +					unprocess_pkts[unprocess_num++] = pkts[i];
> +			} else {
> +				unprocess_pkts[unprocess_num++] = pkts[i];
> +			}
> +		} else if (IS_IPV4_VXLAN_TCP4_PKT(pkts[i]->packet_type) &&
>  				do_vxlan_tcp_gro) {
>  			ret = gro_vxlan_tcp4_reassemble(pkts[i],
>  							&vxlan_tcp_tbl, 0);
> @@ -349,7 +373,22 @@ rte_gro_reassemble(struct rte_mbuf **pkts,
>  	current_time = rte_rdtsc();
> 
>  	for (i = 0; i < nb_pkts; i++) {
> -		if (IS_IPV4_VXLAN_TCP4_PKT(pkts[i]->packet_type) &&
> +		if (IS_IPV4_FRAGMENT(pkts[i]->packet_type)) {
> +			struct rte_ipv4_hdr ip4h_copy;
> +			const struct rte_ipv4_hdr *ip4h = rte_pktmbuf_read(pkts[i], pkts[i]->l2_len,
> +					sizeof(*ip4h), &ip4h_copy);
> +			if (ip4h->next_proto_id == IPPROTO_UDP && do_udp4_gro) {
> +				if (gro_udp4_reassemble(pkts[i], udp_tbl,
> +						current_time) < 0)
> +					unprocess_pkts[unprocess_num++] = pkts[i];
> +			} else if (ip4h->next_proto_id == IPPROTO_TCP && do_tcp4_gro) {
> +				if (gro_tcp4_reassemble(pkts[i], tcp_tbl,
> +						current_time) < 0)
> +					unprocess_pkts[unprocess_num++] = pkts[i];
> +			} else {
> +				unprocess_pkts[unprocess_num++] = pkts[i];
> +			}
> +		} else if (IS_IPV4_VXLAN_TCP4_PKT(pkts[i]->packet_type) &&
>  				do_vxlan_tcp_gro) {
>  			if (gro_vxlan_tcp4_reassemble(pkts[i], vxlan_tcp_tbl,
>  						current_time) < 0)
> --
> 2.25.1
