[dpdk-stable] [PATCH] app/testpmd: fix random number of Tx segments
Zhang, AlvinX
alvinx.zhang at intel.com
Mon Sep 6 12:03:35 CEST 2021
> -----Original Message-----
> From: Li, Xiaoyun <xiaoyun.li at intel.com>
> Sent: Monday, September 6, 2021 4:59 PM
> To: Zhang, AlvinX <alvinx.zhang at intel.com>; Ananyev, Konstantin
> <konstantin.ananyev at intel.com>
> Cc: dev at dpdk.org; stable at dpdk.org
> Subject: RE: [PATCH] app/testpmd: fix random number of Tx segments
>
> Hi
>
> > -----Original Message-----
> > From: Zhang, AlvinX <alvinx.zhang at intel.com>
> > Sent: Thursday, September 2, 2021 16:20
> > To: Li, Xiaoyun <xiaoyun.li at intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev at intel.com>
> > Cc: dev at dpdk.org; Zhang, AlvinX <alvinx.zhang at intel.com>;
> > stable at dpdk.org
> > Subject: [PATCH] app/testpmd: fix random number of Tx segments
> >
> > When a random number of segments in Tx packets is enabled, the total
> > data space length of all segments must be greater than or equal to the
> > size of an Eth/IP/UDP/timestamp packet, i.e. 14 + 20 + 8 + 16 = 58
> > bytes in total. Otherwise the Tx engine may cause the application to
> > crash.
> >
> > Bugzilla ID: 797
> > Fixes: 79bec05b32b7 ("app/testpmd: add ability to split outgoing
> > packets")
> > Cc: stable at dpdk.org
> >
> > Signed-off-by: Alvin Zhang <alvinx.zhang at intel.com>
> > ---
> > app/test-pmd/config.c  | 16 +++++++++++-----
> > app/test-pmd/testpmd.c |  5 +++++
> > app/test-pmd/testpmd.h |  5 +++++
> > app/test-pmd/txonly.c  |  7 +++++--
> > 4 files changed, 26 insertions(+), 7 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index 31d8ba1..5105b3b 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> > @@ -3837,10 +3837,11 @@ struct igb_ring_desc_16_bytes {
> > * Check that each segment length is greater or equal than
> > * the mbuf data size.
> > * Check also that the total packet length is greater or equal than the
> > - * size of an empty UDP/IP packet (sizeof(struct rte_ether_hdr) +
> > - * 20 + 8).
> > + * size of an Eth/IP/UDP + timestamp packet
> > + * (sizeof(struct rte_ether_hdr) + 20 + 8 + 16).
>
> I don't really agree with this. Most of the time, txonly generates packets
> with Eth/IP/UDP; it's not fair to require the header length to include a
> timestamp in all cases. And to be honest, I don't see why you need to add
> "tx_pkt_nb_min_segs": it's only used in txonly when "TX_PKT_SPLIT_RND". So
> the issue is that with "TX_PKT_SPLIT_RND", the random nb_segs may not be
> enough for the header.
>
> But if you read txonly carefully, when "TX_PKT_SPLIT_RND" is set, the first
> segment length should be equal to or greater than 42 (14 + 20 + 8), because
> in that case update_pkt_header() is called, and that function doesn't deal
> with a header split across multiple segments.
> I think there's a bug here.
>
> So I think you should only add a check in pkt_burst_prepare() in txonly.c:
> 	if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) || txonly_multi_flow) {
> +		if (tx_pkt_seg_lengths[0] < 42) {
> +			err_log;
> +			return false;
> +		}
> 		update_pkt_header(pkt, pkt_len);
> 	}
Yes, I didn't notice the UDP header update, but the bug first occurs in this call:
	copy_buf_to_pkt(&pkt_udp_hdr, sizeof(pkt_udp_hdr), pkt,
			sizeof(struct rte_ether_hdr) +
			sizeof(struct rte_ipv4_hdr));
not in update_pkt_header().
Expecting users to set at least 42 bytes for the first segment seems OK,
but I think putting the check where the data space length of the first
segment is configured is more graceful.
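As a minimal sketch of the configuration-time check being discussed (the function name, error message, and array handling are illustrative, not the actual testpmd symbols):

```c
#include <stdint.h>
#include <stdio.h>

/* Eth (14) + IPv4 (20) + UDP (8) header bytes, as in the thread. */
#define TXONLY_HDR_LEN (14 + 20 + 8)

/* Reject a segment layout whose first segment cannot hold the full
 * Eth/IP/UDP header, so update_pkt_header() never has to write across
 * a segment boundary. Returns 1 if the layout is acceptable. */
static int
first_seg_holds_headers(const uint16_t *seg_lengths, unsigned int nb_segs)
{
	if (nb_segs == 0 || seg_lengths[0] < TXONLY_HDR_LEN) {
		fprintf(stderr,
			"first segment must be >= %d bytes to hold Eth/IP/UDP headers\n",
			TXONLY_HDR_LEN);
		return 0;
	}
	return 1;
}
```

With this placement, the rejection happens once when the user configures the segment lengths, rather than per burst in the Tx fast path.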
>
> As for the timestamp, maybe referring to "pkt_copy_split" in csumonly.c is
> better? Copy the extra bytes to the last segment if there is not enough
> room. I'm not sure how to deal with this issue better.
>
> > */
> > tx_pkt_len = 0;
> > + tx_pkt_nb_min_segs = 0;
> > for (i = 0; i < nb_segs; i++) {
> > if (seg_lengths[i] > mbuf_data_size[0]) {
> > fprintf(stderr,
> > @@ -3849,11 +3850,16 @@ struct igb_ring_desc_16_bytes {
> > return;
> > }
> > tx_pkt_len = (uint16_t)(tx_pkt_len + seg_lengths[i]);
> > +
> > + if (!tx_pkt_nb_min_segs &&
> > + tx_pkt_len >= (sizeof(struct rte_ether_hdr) + 20 + 8 + 16))
> > + tx_pkt_nb_min_segs = i + 1;
> > }
> > - if (tx_pkt_len < (sizeof(struct rte_ether_hdr) + 20 + 8)) {
> > +
> > + if (!tx_pkt_nb_min_segs) {
> > fprintf(stderr, "total packet length=%u < %d - give up\n",
> > - (unsigned) tx_pkt_len,
> > - (int)(sizeof(struct rte_ether_hdr) + 20 + 8));
> > + (unsigned int) tx_pkt_len,
> > + (int)(sizeof(struct rte_ether_hdr) + 20 + 8 + 16));
> > return;
> > }
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > index 6cbe9ba..c496e59 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -232,6 +232,11 @@ struct fwd_engine * fwd_engines[] = {
> > };
> > uint8_t tx_pkt_nb_segs = 1; /**< Number of segments in TXONLY packets */
> >
> > +/**< Minimum number of segments in TXONLY packets to accommodate all
> > + * packet headers.
> > + */
> > +uint8_t tx_pkt_nb_min_segs = 1;
> > +
> > +
> > enum tx_pkt_split tx_pkt_split = TX_PKT_SPLIT_OFF; /**< Split policy
> > for packets to TX. */
> >
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> > index 16a3598..f5bc427 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -464,6 +464,11 @@ enum dcb_mode_enable
> > extern uint16_t tx_pkt_length; /**< Length of TXONLY packet */
> > extern uint16_t tx_pkt_seg_lengths[RTE_MAX_SEGS_PER_PKT]; /**< Seg. lengths */
> > extern uint8_t tx_pkt_nb_segs; /**< Number of segments in TX packets */
> > +
> > +/**< Minimum number of segments in TXONLY packets to accommodate all
> > + * packet headers.
> > + */
> > +extern uint8_t tx_pkt_nb_min_segs;
> > extern uint32_t tx_pkt_times_intra;
> > extern uint32_t tx_pkt_times_inter;
> >
> > diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
> > index aed820f..27e4458 100644
> > --- a/app/test-pmd/txonly.c
> > +++ b/app/test-pmd/txonly.c
> > @@ -195,8 +195,11 @@
> > uint32_t nb_segs, pkt_len;
> > uint8_t i;
> >
> > - if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND))
> > - nb_segs = rte_rand() % tx_pkt_nb_segs + 1;
> > + if (unlikely(tx_pkt_split == TX_PKT_SPLIT_RND) &&
> > + tx_pkt_nb_segs > tx_pkt_nb_min_segs)
> > + nb_segs = rte_rand() %
> > + (tx_pkt_nb_segs - tx_pkt_nb_min_segs + 1) +
> > + tx_pkt_nb_min_segs;
> > else
> > nb_segs = tx_pkt_nb_segs;
> >
> > --
> > 1.8.3.1
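For clarity, the bounded random draw in the txonly.c hunk above can be sketched outside the diff as follows (rand_val stands in for rte_rand(); the function name is illustrative):

```c
#include <stdint.h>

/* Sketch of the patch's draw: with min_segs pre-computed in config.c as
 * the fewest segments whose combined length covers the headers, the
 * random segment count stays in [min_segs, total_segs] instead of
 * [1, total_segs], so the chosen segments always fit the headers. */
static uint32_t
bounded_nb_segs(uint64_t rand_val, uint8_t total_segs, uint8_t min_segs)
{
	/* Matches the patch's "else" branch: no room to randomize. */
	if (total_segs <= min_segs)
		return total_segs;
	/* Uniform draw over the (total - min + 1) allowed counts. */
	return rand_val % (total_segs - min_segs + 1) + min_segs;
}
```

For example, with total_segs = 8 and min_segs = 3, the result is always in [3, 8], whereas the pre-patch expression rte_rand() % tx_pkt_nb_segs + 1 could return as few as 1 segment.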