[dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598

Vladislav Zolotarov vladz at cloudius-systems.com
Fri Sep 11 18:13:04 CEST 2015


On Sep 11, 2015 7:00 PM, "Richardson, Bruce" <bruce.richardson at intel.com>
wrote:
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Vladislav Zolotarov
> > Sent: Friday, September 11, 2015 4:13 PM
> > To: Thomas Monjalon
> > Cc: dev at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above
> > 1 for all NICs but 82598
> >
> > On Sep 11, 2015 5:55 PM, "Thomas Monjalon" <thomas.monjalon at 6wind.com>
> > wrote:
> > >
> > > 2015-09-11 17:47, Avi Kivity:
> > > > On 09/11/2015 05:25 PM, didier.pallard wrote:
> > > > > On 08/25/2015 08:52 PM, Vlad Zolotarov wrote:
> > > > >>
> > > > >> Helin, the issue has been seen on x540 devices. Please see
> > > > >> chapter 7.2.1.1 of the x540 spec:
> > > > >>
> > > > >> A packet (or multiple packets in transmit segmentation) can span
> > > > >> any number of buffers (and their descriptors) up to a limit of 40
> > > > >> minus WTHRESH minus 2 (see Section 7.2.3.3 for Tx Ring details
> > > > >> and section Section 7.2.3.5.1 for WTHRESH details). For best
> > > > >> performance it is recommended to minimize the number of buffers
> > > > >> as possible.
> > > > >>
> > > > >> Could you please clarify why you think that the maximum number of
> > > > >> data buffers is limited to 8?
> > > > >>
> > > > >> thanks,
> > > > >> vlad
> > > > >
> > > > > Hi Vlad,
> > > > >
> > > > > Documentation states that a packet (or multiple packets in
> > > > > transmit
> > > > > segmentation) can span any number of buffers (and their
> > > > > descriptors) up to a limit of 40 minus WTHRESH minus 2.
> > > > >
> > > > > Shouldn't there be a test in the transmit function that properly
> > > > > drops mbufs with too many segments, while incrementing a statistic?
> > > > > Otherwise the transmit function may be locked up by a faulty packet
> > > > > without any notification.
> > > > >
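A minimal sketch of such a test, assuming WTHRESH = 0 so the x540
per-packet limit is 40 - 0 - 2 = 38 descriptors; NIC_TX_MAX_SEG and
tx_oversized_dropped are illustrative names, not actual ixgbe driver
symbols:

#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_branch_prediction.h>

#define NIC_TX_MAX_SEG (40 - 0 - 2)     /* 40 - WTHRESH - 2, WTHRESH = 0 */

static uint64_t tx_oversized_dropped;   /* would live in the queue stats */

/* Returns 1 if the mbuf chain fits the HW limit, 0 if it was dropped. */
static inline int
tx_pkt_fits_hw_limit(struct rte_mbuf *m)
{
        /* A single branch per packet, as discussed in this thread. */
        if (unlikely(m->nb_segs > NIC_TX_MAX_SEG)) {
                tx_oversized_dropped++;
                rte_pktmbuf_free(m);
                return 0;
        }
        return 1;
}

The transmit function would call this on each mbuf and simply skip the
ones it drops, so a faulty packet can never wedge the ring.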
> > > >
> > > > What we proposed is that the pmd expose to dpdk, and dpdk expose to
> > > > the application, an mbuf check function.  This way applications that
> > > > can generate complex packets can verify that the device will be able
> > > > to process them, and applications that only generate simple mbufs
> > > > can avoid the overhead by not calling the function.
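A sketch of what that hook could look like (eth_tx_check_t, tx_pkt_check
and app_pkt_ok are hypothetical names, not an existing DPDK API):

#include <stddef.h>
#include <rte_mbuf.h>

/* Hypothetical per-driver validation hook exposed up through ethdev. */
typedef int (*eth_tx_check_t)(void *tx_queue, struct rte_mbuf *m);

struct eth_tx_check_ops {
        eth_tx_check_t tx_pkt_check;    /* NULL if the HW has no limits */
};

/* Apps that build complex chains call this before rte_eth_tx_burst();
 * apps that only send simple mbufs skip it and pay nothing. */
static inline int
app_pkt_ok(const struct eth_tx_check_ops *ops, void *txq,
           struct rte_mbuf *m)
{
        return ops->tx_pkt_check == NULL || ops->tx_pkt_check(txq, m) == 0;
}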
> > >
> > > More than a check, it should be exposed as a capability of the port.
> > > Anyway, if the application sends too many segments, the driver must
> > > drop the packet to avoid a hang, and maintain a dedicated statistic
> > > counter to allow easy debugging.
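A port-capability version could be as small as one extra limit reported
by the PMD at init time (the struct and field below are hypothetical):

#include <stdint.h>

/* Hypothetical addition to the port info reported by the PMD. */
struct eth_tx_desc_limits {
        uint16_t max_seg_per_pkt;       /* e.g. 40 - WTHRESH - 2 on x540 */
};

/* The application reads the limit once at init and sizes its mbuf
 * chains accordingly, instead of probing per packet. */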
> >
> > I agree with Thomas - this should not be optional. Malformed packets
> > should be dropped. In the ixgbe case it's a very simple test - it's a
> > single branch per packet, so I doubt that it could impose any measurable
> > performance degradation.
> >
> Actually, it could very well do - we'd have to test it. For the vector IO
> paths, every additional cycle in the RX or TX paths causes a noticeable
> perf drop.

Well, if your application is willing to learn all the different HW
limitations then you may not need it. However, an application usually
doesn't want to know the HW technical details, and in this case ignoring
them may cause the HW to hang.

Of course, if your app always sends single-fragment packets of less than
1500 bytes then you are right and will most likely not hit any HW
limitation. However, what I have in mind is a full-featured case where
packets are a bit bigger and more complicated, and where a single branch
per packet will change nothing. This is regarding the 40-segments case.

In regard to the RS bit - this affects every packet, since according to
the spec it should be set in every packet.
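For illustration, with tx_rs_thresh forced to 1 the last descriptor of
every packet carries RS, not just one descriptor in every Nth packet.
The macro values below match the ixgbe advanced TX descriptor command
field, but treat this as a sketch:

#include <stdint.h>

#define IXGBE_ADVTXD_DCMD_EOP 0x01000000        /* end of packet */
#define IXGBE_ADVTXD_DCMD_RS  0x08000000        /* report status */

/* Command flags for the last descriptor of a packet when
 * tx_rs_thresh == 1: RS is set on every single packet. */
static inline uint32_t
ixgbe_last_desc_cmd(uint32_t cmd)
{
        return cmd | IXGBE_ADVTXD_DCMD_EOP | IXGBE_ADVTXD_DCMD_RS;
}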

>
> /Bruce

