[dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1 for all NICs but 82598

Richardson, Bruce bruce.richardson at intel.com
Fri Sep 11 18:07:06 CEST 2015



> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Vladislav Zolotarov
> Sent: Friday, September 11, 2015 5:04 PM
> To: Avi Kivity
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v1] ixgbe_pmd: forbid tx_rs_thresh above 1
> for all NICs but 82598
> 
> On Sep 11, 2015 6:43 PM, "Avi Kivity" <avi at cloudius-systems.com> wrote:
> >
> > On 09/11/2015 06:12 PM, Vladislav Zolotarov wrote:
> >>
> >>
> >> On Sep 11, 2015 5:55 PM, "Thomas Monjalon" <thomas.monjalon at 6wind.com>
> >> wrote:
> >> >
> >> > 2015-09-11 17:47, Avi Kivity:
> >> > > On 09/11/2015 05:25 PM, didier.pallard wrote:
> >> > > > On 08/25/2015 08:52 PM, Vlad Zolotarov wrote:
> >> > > >>
> >> > > >> Helin, the issue has been seen on x540 devices. Please see
> >> > > >> chapter 7.2.1.1 of the x540 spec:
> >> > > >>
> >> > > >> A packet (or multiple packets in transmit segmentation) can span
> >> > > >> any number of buffers (and their descriptors) up to a limit of 40
> >> > > >> minus WTHRESH minus 2 (see Section 7.2.3.3 for Tx Ring details and
> >> > > >> Section 7.2.3.5.1 for WTHRESH details). For best performance it is
> >> > > >> recommended to minimize the number of buffers used.
> >> > > >>
> >> > > >> Could you please clarify why you think that the maximum number
> >> > > >> of data buffers is limited to 8?
> >> > > >>
> >> > > >> thanks,
> >> > > >> vlad
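To put the limit from that chapter in concrete terms, a minimal sketch (the
macro and function names are invented for illustration, not taken from the
datasheet or the PMD):

#include <stdint.h>

/* Upper bound from chapter 7.2.1.1: a packet (or the packets of one TSO
 * send) may span at most 40 - WTHRESH - 2 data buffers. */
#define X540_TX_SPAN_LIMIT 40

static inline uint16_t
x540_max_tx_buffers(uint16_t wthresh)
{
        return (uint16_t)(X540_TX_SPAN_LIMIT - wthresh - 2);
}

/* e.g. with WTHRESH = 0 the limit is 38 buffers per packet/TSO send. */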
> >> > > >
> >> > > > Hi vlad,
> >> > > >
> >> > > > Documentation states that a packet (or multiple packets in
> >> > > > transmit segmentation) can span any number of buffers (and their
> >> > > > descriptors) up to a limit of 40 minus WTHRESH minus 2.
> >> > > >
> >> > > > Shouldn't there be a test in the transmit function that properly
> >> > > > drops mbufs with too many segments, while incrementing a
> >> > > > statistic? Otherwise the transmit function may be locked up by
> >> > > > the faulty packet without any notification.
> >> > > >
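A rough sketch of the guard being suggested here, assuming hypothetical
per-queue fields (max_seg and a dedicated drop counter) rather than the real
ixgbe structures:

#include <rte_branch_prediction.h>
#include <rte_mbuf.h>

/* Hypothetical per-queue fields, for this sketch only. */
struct sketch_txq {
        uint16_t max_seg;         /* e.g. 40 - WTHRESH - 2 */
        uint64_t too_many_segs;   /* dedicated drop counter */
};

static uint16_t
sketch_tx_burst(struct sketch_txq *txq, struct rte_mbuf **tx_pkts,
                uint16_t nb_pkts)
{
        uint16_t i;

        for (i = 0; i < nb_pkts; i++) {
                struct rte_mbuf *m = tx_pkts[i];

                /* One branch per packet: drop and count instead of posting
                 * a descriptor chain that could hang the queue. */
                if (unlikely(m->nb_segs > txq->max_seg)) {
                        txq->too_many_segs++;
                        rte_pktmbuf_free(m);
                        continue;
                }
                /* ... normal descriptor setup for m ... */
        }
        return i;
}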
> >> > >
> >> > > What we proposed is that the PMD expose to DPDK, and DPDK expose to
> >> > > the application, an mbuf check function.  This way, applications
> >> > > that generate complex packets can verify that the device will be
> >> > > able to process them, and applications that only generate simple
> >> > > mbufs can avoid the overhead by not calling the function.
> >> >
> >> > More than a check, it should be exposed as a capability of the port.
> >> > Anyway, if the application sends too many segments, the driver must
> >> > drop the packet to avoid a hang, and maintain a dedicated statistic
> >> > counter to allow easy debugging.
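Something along the lines of the check function and port capability
discussed above might look like this sketch; both names are hypothetical,
not an existing ethdev or ixgbe API:

#include <errno.h>
#include <rte_mbuf.h>

/* Hypothetical capability query and check hook, for this sketch only. */
uint16_t sketch_tx_max_segs(uint16_t port_id);

static inline int
sketch_tx_pkt_check(const struct rte_mbuf *m, uint16_t max_segs)
{
        /* Tell the caller the device cannot accept this chain, so the
         * application can linearize, split, or drop it itself. */
        if (m->nb_segs > max_segs)
                return -EINVAL;
        return 0;
}

An application that builds complex chains would query the limit once and run
the check before rte_eth_tx_burst(); one that only builds single-segment
mbufs can skip it and pay nothing.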
> >>
> >> I agree with Thomas - this should not be optional. Malformed packets
> >> should be dropped. In the ixgbe case it's a very simple test - a single
> >> branch per packet - so I doubt that it could impose any measurable
> >> performance degradation.
> >>
> >>
> >
> > A drop allows the application no chance to recover.  The driver must
> > either provide the ability for the application to know that it cannot
> > accept the packet, or it must fix it up itself.
> 
> An appropriate statistics counter would be a perfect tool to detect such
> issues. Knowingly sending a packet that will cause the HW to hang is not
> acceptable.

I would agree. Drivers should provide a function to query the max number of
segments they can accept, and they should discard any packets exceeding that
number, just tracking the drops via a stat.
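From the application side that could look roughly like the sketch below; the
query function name is again hypothetical:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical query for the per-packet segment limit. */
uint16_t sketch_tx_max_segs(uint16_t port_id);

static uint16_t
send_checked(uint16_t port_id, uint16_t queue_id,
             struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint16_t max_segs = sketch_tx_max_segs(port_id);
        uint16_t i;

        for (i = 0; i < nb_pkts; i++) {
                if (pkts[i]->nb_segs > max_segs) {
                        /* Re-segment or linearize here; anything that still
                         * exceeds the limit is dropped and counted by the
                         * driver instead of hanging the queue. */
                }
        }
        return rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);
}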

/Bruce

