[dpdk-stable] [PATCH] net/enic: fix TSO for packets greater than 9208 bytes

Luca Boccassi bluca at debian.org
Fri Nov 3 11:00:15 CET 2017


On Wed, 2017-11-01 at 18:56 -0700, John Daley wrote:
> A check was previously added to drop Tx packets greater than what the
> NIC is capable of sending, since such packets can freeze the send
> queue. The check did not account for TSO packets however, so TSO was
> limited to 9208 bytes.
> 
> Check the packet length only for non-TSO packets. Also ensure that the
> TSO segment size plus the header length does not exceed what the NIC
> is capable of, since this can also freeze the send queue.
> 
> Use the PKT_TX_TCP_SEG ol_flag instead of m->tso_segsz, since the flag
> is the preferred way to check for TSO.
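
Restating the two checks as a self-contained sketch (ENIC_TX_MAX_PKT_SIZE,
tso_segsz and tso_header_len() are the names used in the patch below, and
9208 is the limit quoted in the commit message; the helper itself and its
name are only illustrative, not driver code):

  #include <stdbool.h>
  #include <stdint.h>

  /* NIC limit per the commit message: frames or TSO segments above this
   * size can freeze the send queue.
   */
  #define ENIC_TX_MAX_PKT_SIZE 9208

  /* Illustrative helper: return true if the Tx path should drop the packet.
   * For TSO, header_len is what tso_header_len() returns (0 if not TCP).
   */
  static bool
  enic_tx_pkt_too_big(bool tso, uint32_t pkt_len,
                      uint16_t tso_segsz, uint16_t header_len)
  {
          if (!tso)
                  /* non-TSO: the whole frame must fit the NIC limit */
                  return pkt_len > ENIC_TX_MAX_PKT_SIZE;
          /* TSO: each segment (replicated headers plus up to tso_segsz
           * bytes of payload) must fit the same limit.
           */
          return header_len == 0 ||
                 (uint32_t)tso_segsz + header_len > ENIC_TX_MAX_PKT_SIZE;
  }

With these rules a large TSO packet, e.g. 64 kB with tso_segsz = 1460 and
66 bytes of headers, now passes (1460 + 66 <= 9208), while a 10000-byte
non-TSO packet is still dropped.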
> 
> Fixes: ed6e564c214e ("net/enic: fix memory leak with oversized Tx packets")
> Cc: stable at dpdk.org
> 
> Signed-off-by: John Daley <johndale at cisco.com>
> ---
> 
> Note that there is some more work to do on enic TSO: the header length
> is calculated by looking at the packet instead of just trusting the
> mbuf TSO offload header lengths. The 'tx_oversized' stat is used for
> more than just oversized packets; since it gets rolled into 'oerrors'
> this doesn't matter, but the name should be changed. Some TSO tunneling
> support can be added for newer hardware. These changes will come in the
> next release, but we hope that this patch can be accepted in 17.11
> because it solves an existing customer problem.
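
(For reference only, not something in this patch: a hypothetical mbuf-based
way to get the header length, using the l2/l3/l4 length fields that
applications must already fill in for PKT_TX_TCP_SEG. Tunnel TSO would still
need the outer_l2_len/outer_l3_len fields on newer hardware.)

  #include <rte_mbuf.h>

  /* Hypothetical helper: trust the mbuf TSO offload lengths instead of
   * parsing the packet headers the way tso_header_len() does today.
   */
  static inline uint16_t
  mbuf_tso_header_len(const struct rte_mbuf *m)
  {
          /* Ethernet (+ VLAN) + IP + TCP header bytes declared by the app */
          return m->l2_len + m->l3_len + m->l4_len;
  }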
> 
>  drivers/net/enic/enic_rxtx.c | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
> index a39172f14..e938193b5 100644
> --- a/drivers/net/enic/enic_rxtx.c
> +++ b/drivers/net/enic/enic_rxtx.c
> @@ -546,12 +546,15 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  	uint64_t bus_addr;
>  	uint8_t offload_mode;
>  	uint16_t header_len;
> +	uint64_t tso;
> +	rte_atomic64_t *tx_oversized;
>  
>  	enic_cleanup_wq(enic, wq);
>  	wq_desc_avail = vnic_wq_desc_avail(wq);
>  	head_idx = wq->head_idx;
>  	desc_count = wq->ring.desc_count;
>  	ol_flags_mask = PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK;
> +	tx_oversized = &enic->soft_stats.tx_oversized;
>  
>  	nb_pkts = RTE_MIN(nb_pkts, ENIC_TX_XMIT_MAX);
>  
> @@ -561,10 +564,12 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  		data_len = tx_pkt->data_len;
>  		ol_flags = tx_pkt->ol_flags;
>  		nb_segs = tx_pkt->nb_segs;
> +		tso = ol_flags & PKT_TX_TCP_SEG;
>  
> -		if (pkt_len > ENIC_TX_MAX_PKT_SIZE) {
> +		/* drop packet if it's too big to send */
> +		if (unlikely(!tso && (pkt_len > ENIC_TX_MAX_PKT_SIZE))) {
>  			rte_pktmbuf_free(tx_pkt);
> -			rte_atomic64_inc(&enic->soft_stats.tx_oversized);
> +			rte_atomic64_inc(tx_oversized);
>  			continue;
>  		}
>  
> @@ -587,13 +592,21 @@ uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>  		offload_mode = WQ_ENET_OFFLOAD_MODE_CSUM;
>  		header_len = 0;
>  
> -		if (tx_pkt->tso_segsz) {
> +		if (tso) {
>  			header_len = tso_header_len(tx_pkt);
> -			if (header_len) {
> -				offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
> -				mss = tx_pkt->tso_segsz;
> +
> +			/* Drop if non-TCP packet or TSO seg size is too big */
> +			if (unlikely((header_len == 0) || ((tx_pkt->tso_segsz +
> +			    header_len) > ENIC_TX_MAX_PKT_SIZE))) {
> +				rte_pktmbuf_free(tx_pkt);
> +				rte_atomic64_inc(tx_oversized);
> +				continue;
>  			}
> +
> +			offload_mode = WQ_ENET_OFFLOAD_MODE_TSO;
> +			mss = tx_pkt->tso_segsz;
>  		}
> +
>  		if ((ol_flags & ol_flags_mask) && (header_len == 0)) {
>  			if (ol_flags & PKT_TX_IP_CKSUM)
>  				mss |= ENIC_CALC_IP_CKSUM;

Hi,

Has this, or a version of this, been accepted into dpdk/master? I did a
quick search but couldn't find it.

I tried to apply it to dpdk-stable/16.11 but the context is quite
different so it doesn't apply. If you would like it for 16.11.4, after
it's accepted in dpdk/master, please send a reworked version that can
be applied to dpdk-stable/16.11.

Thanks!

-- 
Kind regards,
Luca Boccassi

