[dpdk-dev] [PATCH 5/8] net/mlx4: merge Tx queue rings management

Adrien Mazarguil adrien.mazarguil at 6wind.com
Wed Dec 6 13:09:53 CET 2017


On Wed, Dec 06, 2017 at 11:43:25AM +0000, Matan Azrad wrote:
> Hi Adrien
> 
> > -----Original Message-----
> > From: Adrien Mazarguil [mailto:adrien.mazarguil at 6wind.com]
> > Sent: Wednesday, December 6, 2017 12:59 PM
> > To: Matan Azrad <matan at mellanox.com>
> > Cc: dev at dpdk.org
> > Subject: Re: [PATCH 5/8] net/mlx4: merge Tx queue rings management
> > 
> > On Tue, Nov 28, 2017 at 12:19:27PM +0000, Matan Azrad wrote:
> > > The Tx queue send ring was managed by Tx block head,tail,count and
> > > mask management variables which were used for managing the send
> > > queue remain space and next places of empty or completted work
> > > queue entries.
> > 
> > completted => completed
> > 
> > >
> > > This method suffered from an actual addresses recalculation per
> > > packet, an unnecessary Tx block based calculations and an expensive
> > > dual management of Tx rings.
> > >
> > > Move send queue ring calculation to be based on actual addresses while
> > > managing it by descriptors ring indexes.
> > >
> > > Add new work queue entry pointer to the descriptor element to hold the
> > > appropriate entry in the send queue.
> > >
> > > Signed-off-by: Matan Azrad <matan at mellanox.com>
> > > ---
> > >  drivers/net/mlx4/mlx4_prm.h  |  20 ++--
> > >  drivers/net/mlx4/mlx4_rxtx.c | 241 +++++++++++++++++++------------------
> > >  drivers/net/mlx4/mlx4_rxtx.h |   1 +
> > >  drivers/net/mlx4/mlx4_txq.c  |  23 +++--
> > >  4 files changed, 126 insertions(+), 159 deletions(-)
> > >
> > > diff --git a/drivers/net/mlx4/mlx4_prm.h b/drivers/net/mlx4/mlx4_prm.h
> > > index fcc7c12..2ca303a 100644
> > > --- a/drivers/net/mlx4/mlx4_prm.h
> > > +++ b/drivers/net/mlx4/mlx4_prm.h
<snip>
> > > struct mlx4_sq {
> > >  	volatile uint8_t *buf; /**< SQ buffer. */
> > >  	volatile uint8_t *eob; /**< End of SQ buffer */
> > > -	uint32_t head; /**< SQ head counter in units of TXBBS. */
> > > -	uint32_t tail; /**< SQ tail counter in units of TXBBS. */
> > > -	uint32_t txbb_cnt; /**< Num of WQEBB in the Q (should be ^2). */
> > > -	uint32_t txbb_cnt_mask; /**< txbbs_cnt mask (txbb_cnt is ^2). */
> > > -	uint32_t headroom_txbbs; /**< Num of txbbs that should be kept free. */
> > > +	uint32_t size; /**< SQ size includes headroom. */
> > > +	int32_t remain_size; /**< Remain WQE size in SQ. */
> > 
> > Remain => Remaining?
> > 
> OK
> > By "size", do you mean "room" as there could be several WQEs in there?
> > 
> Size in bytes.
> remaining size | remaining space | remaining room | remaining bytes, which one do you prefer?

I think this should fully clarify:

 Remaining WQE room in SQ (bytes).
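
i.e. the field would then read (type kept as in the patch here; its
signedness is a separate issue, see below):

 	int32_t remain_size; /**< Remaining WQE room in SQ (bytes). */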

> 
> > Note before reviewing the rest of this patch: the fact it's a signed integer
> > bothers me; it's probably a mistake.
> 
> There are places in the code where this variable can be used for signed calculations.

My point is these signed calculations shouldn't be signed in the first
place. A buffer size cannot be negative.
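
For example (a sketch with made-up names, not this patch's code),
checking for room before consuming it never needs a negative value:

 #include <stdint.h>

 static uint32_t
 sq_consume(uint32_t remain_size, uint32_t wqe_size, int *enough)
 {
 	*enough = remain_size >= wqe_size;
 	/* Test first, subtract second: no underflow, no signed type. */
 	return *enough ? remain_size - wqe_size : remain_size;
 }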

> 
> > You should standardize on unsigned values everywhere.
> 
> Why?
> Each field should use the most appropriate type.

Because you used the wrong type everywhere else. The root cause seems to be:

 #define MLX4_MAX_WQE_SIZE 512

Which must either be cast when used or redefined like:

 #define MLX4_MAX_WQE_SIZE 512u

Or even:

 #define MLX4_MAX_WQE_SIZE UINT32_C(512)
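
With the unsigned definition, checks against the SQ fields stay in
unsigned arithmetic with no cast at each use site, e.g. (hypothetical
helper, for illustration only):

 #include <stdint.h>

 #define MLX4_MAX_WQE_SIZE UINT32_C(512)

 /* Room check: both operands unsigned, no sign-compare warnings. */
 static inline int
 sq_has_room(uint32_t remain_size)
 {
 	return remain_size >= MLX4_MAX_WQE_SIZE;
 }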

<snip>
> > > diff --git a/drivers/net/mlx4/mlx4_rxtx.c b/drivers/net/mlx4/mlx4_rxtx.c
> > > index b9cb2fc..0a8ef93 100644
> > > --- a/drivers/net/mlx4/mlx4_rxtx.c
> > > +++ b/drivers/net/mlx4/mlx4_rxtx.c
<snip>
> > > @@ -421,41 +403,27 @@ struct pv {
> > >  	return buf->pool;
> > >  }
> > >
> > > -static int
> > > +static volatile struct mlx4_wqe_ctrl_seg *
> > >  mlx4_tx_burst_segs(struct rte_mbuf *buf, struct txq *txq,
> > > -		   volatile struct mlx4_wqe_ctrl_seg **pctrl)
> > > +		   volatile struct mlx4_wqe_ctrl_seg *ctrl)
> > 
> > Can you use this opportunity to document this function?
> > 
> Sure, new patch for this?

You can make it part of this one if you prefer, no problem either way.
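
For instance, something along these lines (a sketch only: parameter
descriptions inferred from the signature, return semantics assumed):

 /**
  * Write WQEs for a multi-segment packet.
  *
  * @param buf
  *   Packet mbuf.
  * @param txq
  *   Tx queue.
  * @param ctrl
  *   Pointer to the WQE control segment to start from.
  *
  * @return
  *   Pointer to the control segment of the built WQE, NULL on error.
  */
 static volatile struct mlx4_wqe_ctrl_seg *
 mlx4_tx_burst_segs(struct rte_mbuf *buf, struct txq *txq,
 		   volatile struct mlx4_wqe_ctrl_seg *ctrl);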

-- 
Adrien Mazarguil
6WIND

