[dpdk-dev] [PATCH 2/4] net/mlx5: add support for Rx queue delay drop

Bing Zhao bingz at nvidia.com
Thu Nov 4 15:34:44 CET 2021


Hi David,

Many thanks for your comments. My answers are inline.

> -----Original Message-----
> From: David Marchand <david.marchand at redhat.com>
> Sent: Thursday, November 4, 2021 10:01 PM
> To: Bing Zhao <bingz at nvidia.com>
> Cc: Slava Ovsiienko <viacheslavo at nvidia.com>; Matan Azrad
> <matan at nvidia.com>; dev <dev at dpdk.org>; Raslan Darawsheh
> <rasland at nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas at monjalon.net>; Ori Kam <orika at nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH 2/4] net/mlx5: add support for Rx
> queue delay drop
> 
> External email: Use caution opening links or attachments
> 
> 
> On Thu, Nov 4, 2021 at 12:27 PM Bing Zhao <bingz at nvidia.com> wrote:
> >
> > For an Ethernet RQ, packets received when receive WQEs are
> exhausted
> > are dropped. This behavior prevents slow or malicious software
> > entities at the host from affecting the network. While for hairpin
> > cases, even if there is no software involved during the packet
> > forwarding from Rx to Tx side, some hiccup in the hardware or back
> > pressure from Tx side may still cause the WQEs to be exhausted. In
> > certain scenarios it may be preferred to configure the device to
> avoid
> > such packet drops, assuming the posting of WQEs will resume
> shortly.
> >
> > To support this, a new devarg "delay_drop_en" is introduced, by
> > default, the delay drop is enabled for hairpin Rx queues and
> disabled
> > for standard Rx queues. This value is used as a bit mask:
> >   - bit 0: enablement of standard Rx queue
> >   - bit 1: enablement of hairpin Rx queue And this attribute will
> be
> > applied to all Rx queues of a device.
> 
> Rather than a devargs, why can't the driver use this option in the
> identified usecases where it makes sense?
> Here, hairpin.

In the patch set v2, the attribute for hairpin is also disabled by default, so the default behavior will remain the same as today. This is only a minor change, but it may have some impact on the HW processing.
With this attribute ON for a specific queue, the impact is as follows:

Pros: If there is some hiccup in the SW / HW, or a burst that the SW is not fast enough to handle, then once the WQEs are exhausted in the queue, the packets will not be dropped immediately but held in the NIC. This gives more tolerance and makes the queue behave like a dropless queue.

Cons: While some packets are waiting for available WQEs, new packets may be dropped if there is not enough space, or new packets may see a bigger latency since the previous ones are waiting. If the traffic exceeds the line rate, or the SW is too slow to handle the incoming traffic, the packets will be dropped eventually. Some contexts are global, and waiting on one queue may have an impact on other queues.

So right now this devarg gives the application the flexibility / ability to verify and decide whether this is needed in real life. Theoretically, it would help in most cases.

> 
> 
> --
> David Marchand

BR. Bing
