[dpdk-dev,3/9] mbuf: set mbuf fields while in pool

Message ID 1488966121-22853-4-git-send-email-olivier.matz@6wind.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Olivier Matz March 8, 2017, 9:41 a.m. UTC
  Set the value of m->refcnt to 1, m->nb_segs to 1 and m->next
to NULL when the mbuf is stored inside the mempool (unused).
This is done in rte_pktmbuf_prefree_seg(), before freeing or
recycling a mbuf.

Before this patch, the value of m->refcnt was expected to be 0
while in pool.

The objectives are:

- to avoid having drivers set m->next to NULL in the early Rx path, since
  this field sits in the second 64 bytes of the mbuf and accessing it can
  trigger a cache miss

- to rationalize the behavior of raw_alloc/raw_free: each is now the
  symmetric counterpart of the other, and refcnt is never changed in these
  functions.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 drivers/net/mlx5/mlx5_rxtx.c     |  5 ++---
 drivers/net/mpipe/mpipe_tilegx.c |  1 +
 lib/librte_mbuf/rte_mbuf.c       |  2 ++
 lib/librte_mbuf/rte_mbuf.h       | 42 +++++++++++++++++++++++++++++-----------
 4 files changed, 36 insertions(+), 14 deletions(-)
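
A minimal sketch of the raw_alloc/raw_free symmetry described above, assuming
the new in-pool defaults; the helper names grab_mbuf() and give_back_mbuf()
are illustrative, not part of the patch:

#include <rte_mbuf.h>

/* Take an mbuf straight from the pool; with this patch the pool already
 * guarantees refcnt=1, next=NULL and nb_segs=1, so none of those fields
 * needs to be re-initialized here. */
static inline struct rte_mbuf *
grab_mbuf(struct rte_mempool *mp)
{
	struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

	if (m == NULL)
		return NULL;
	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
	RTE_ASSERT(m->next == NULL && m->nb_segs == 1);
	return m;
}

/* Mirror of grab_mbuf(): the mbuf goes back with the same field values,
 * so refcnt is never touched in either direction. */
static inline void
give_back_mbuf(struct rte_mbuf *m)
{
	rte_mbuf_raw_free(m);
}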
  

Comments

Bruce Richardson March 31, 2017, 11:21 a.m. UTC | #1
On Wed, Mar 08, 2017 at 10:41:55AM +0100, Olivier Matz wrote:
> Set the value of m->refcnt to 1, m->nb_segs to 1 and m->next
> to NULL when the mbuf is stored inside the mempool (unused).
> This is done in rte_pktmbuf_prefree_seg(), before freeing or
> recycling a mbuf.
> 
> Before this patch, the value of m->refcnt was expected to be 0
> while in pool.
> 
> The objectives are:
> 
> - to avoid having drivers set m->next to NULL in the early Rx path, since
>   this field sits in the second 64 bytes of the mbuf and accessing it can
>   trigger a cache miss
> 
> - to rationalize the behavior of raw_alloc/raw_free: each is now the
>   symmetric counterpart of the other, and refcnt is never changed in these
>   functions.
> 
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> ---
>  drivers/net/mlx5/mlx5_rxtx.c     |  5 ++---
>  drivers/net/mpipe/mpipe_tilegx.c |  1 +
>  lib/librte_mbuf/rte_mbuf.c       |  2 ++
>  lib/librte_mbuf/rte_mbuf.h       | 42 +++++++++++++++++++++++++++++-----------
>  4 files changed, 36 insertions(+), 14 deletions(-)
> 
<snip>
>  /**
> @@ -1244,9 +1262,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
>  	__rte_mbuf_sanity_check(m, 0);
>  
>  	if (likely(rte_mbuf_refcnt_update(m, -1) == 0)) {
> -		/* if this is an indirect mbuf, it is detached. */
>  		if (RTE_MBUF_INDIRECT(m))
>  			rte_pktmbuf_detach(m);
> +
> +		m->next = NULL;
> +		m->nb_segs = 1;
> +		rte_mbuf_refcnt_set(m, 1);
> +
>  		return m;
>  	}
>  	return NULL;

Do we need to make this change to prefree_seg? If we update the detach
function to set the next pointer to NULL when detaching a segment, and if we
change the "free" function that frees a whole chain of mbufs, we should
be covered, should we not? If we are freeing a standalone segment, that
segment should already have its nb_segs and next fields correct.

/Bruce
  
Ananyev, Konstantin March 31, 2017, 11:51 a.m. UTC | #2
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, March 31, 2017 12:22 PM
> To: Olivier Matz <olivier.matz@6wind.com>
> Cc: dev@dpdk.org; Ananyev, Konstantin <konstantin.ananyev@intel.com>; mb@smartsharesystems.com; Chilikin, Andrey
> <andrey.chilikin@intel.com>; jblunck@infradead.org; nelio.laranjeiro@6wind.com; arybchenko@solarflare.com
> Subject: Re: [PATCH 3/9] mbuf: set mbuf fields while in pool
> 
> On Wed, Mar 08, 2017 at 10:41:55AM +0100, Olivier Matz wrote:
> > Set the value of m->refcnt to 1, m->nb_segs to 1 and m->next
> > to NULL when the mbuf is stored inside the mempool (unused).
> > This is done in rte_pktmbuf_prefree_seg(), before freeing or
> > recycling a mbuf.
> >
> > Before this patch, the value of m->refcnt was expected to be 0
> > while in pool.
> >
> > The objectives are:
> >
> > - to avoid having drivers set m->next to NULL in the early Rx path, since
> >   this field sits in the second 64 bytes of the mbuf and accessing it can
> >   trigger a cache miss
> >
> > - to rationalize the behavior of raw_alloc/raw_free: each is now the
> >   symmetric counterpart of the other, and refcnt is never changed in these
> >   functions.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> >  drivers/net/mlx5/mlx5_rxtx.c     |  5 ++---
> >  drivers/net/mpipe/mpipe_tilegx.c |  1 +
> >  lib/librte_mbuf/rte_mbuf.c       |  2 ++
> >  lib/librte_mbuf/rte_mbuf.h       | 42 +++++++++++++++++++++++++++++-----------
> >  4 files changed, 36 insertions(+), 14 deletions(-)
> >
> <snip>
> >  /**
> > @@ -1244,9 +1262,13 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >  	__rte_mbuf_sanity_check(m, 0);
> >
> >  	if (likely(rte_mbuf_refcnt_update(m, -1) == 0)) {
> > -		/* if this is an indirect mbuf, it is detached. */
> >  		if (RTE_MBUF_INDIRECT(m))
> >  			rte_pktmbuf_detach(m);
> > +
> > +		m->next = NULL;
> > +		m->nb_segs = 1;
> > +		rte_mbuf_refcnt_set(m, 1);
> > +
> >  		return m;
> >  	}
> >  	return NULL;
> 
> Do we need to make this change to prefree_seg? If we update the detach
> function to set the next pointer to NULL when detaching a segment, and if we
> change the "free" function that frees a whole chain of mbufs, we should
> be covered, should we not? If we are freeing a standalone segment, that
> segment should already have its nb_segs and next fields correct.

detach() is invoked only for indirect mbufs, but we can have a chain of
direct mbufs too.
As for free(), most PMDs use either rte_pktmbuf_free_seg() or
rte_pktmbuf_prefree_seg() followed by rte_mempool_put_bulk() directly.
Konstantin
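
A minimal sketch of the fast path Konstantin refers to: a Tx completion
handler that calls rte_pktmbuf_prefree_seg() and returns the segments to
their pool in bulk. Function and variable names are illustrative, not from
any specific PMD; it assumes all segments come from the same mempool and
that at most TX_FREE_MAX mbufs are freed per call:

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define TX_FREE_MAX 64

/* Return completed Tx segments to their pool in bulk.  Each segment that
 * rte_pktmbuf_prefree_seg() hands back is already reinitialized
 * (refcnt=1, next=NULL, nb_segs=1), so it can go straight into the pool. */
static void
tx_free_bulk(struct rte_mbuf **done, unsigned int nb_done)
{
	void *to_free[TX_FREE_MAX];
	unsigned int i, n = 0;

	for (i = 0; i < nb_done; i++) {
		struct rte_mbuf *m = rte_pktmbuf_prefree_seg(done[i]);

		/* NULL means the mbuf is still referenced elsewhere. */
		if (m != NULL)
			to_free[n++] = m;
	}
	if (n > 0)
		rte_mempool_put_bulk(((struct rte_mbuf *)to_free[0])->pool,
				     to_free, n);
}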
  

Patch

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 41a5bb2..fc59544 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1398,7 +1398,8 @@  mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			while (pkt != seg) {
 				assert(pkt != (*rxq->elts)[idx]);
 				rep = NEXT(pkt);
-				rte_mbuf_refcnt_set(pkt, 0);
+				NEXT(pkt) = NULL;
+				NB_SEGS(pkt) = 1;
 				rte_mbuf_raw_free(pkt);
 				pkt = rep;
 			}
@@ -1409,13 +1410,11 @@  mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 			len = mlx5_rx_poll_len(rxq, cqe, cqe_cnt,
 					       &rss_hash_res);
 			if (!len) {
-				rte_mbuf_refcnt_set(rep, 0);
 				rte_mbuf_raw_free(rep);
 				break;
 			}
 			if (unlikely(len == -1)) {
 				/* RX error, packet is likely too large. */
-				rte_mbuf_refcnt_set(rep, 0);
 				rte_mbuf_raw_free(rep);
 				++rxq->stats.idropped;
 				goto skip;
diff --git a/drivers/net/mpipe/mpipe_tilegx.c b/drivers/net/mpipe/mpipe_tilegx.c
index 536b8ea..0135e2f 100644
--- a/drivers/net/mpipe/mpipe_tilegx.c
+++ b/drivers/net/mpipe/mpipe_tilegx.c
@@ -557,6 +557,7 @@  mpipe_recv_flush_stack(struct mpipe_dev_priv *priv)
 		mbuf->packet_type = 0;
 		mbuf->data_len    = 0;
 		mbuf->pkt_len     = 0;
+		mbuf->next        = NULL;
 
 		rte_mbuf_raw_free(mbuf);
 	}
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 72ad91e..0acc810 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -145,6 +145,8 @@  rte_pktmbuf_init(struct rte_mempool *mp,
 	m->pool = mp;
 	m->nb_segs = 1;
 	m->port = 0xff;
+	rte_mbuf_refcnt_set(m, 1);
+	m->next = NULL;
 }
 
 /* helper to create a mbuf pool */
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 575dc9d..b4fe786 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -768,6 +768,11 @@  rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
  * initializing all the required fields. See rte_pktmbuf_reset().
  * For standard needs, prefer rte_pktmbuf_alloc().
  *
+ * The caller can expect that the following fields of the mbuf structure
+ * are initialized: buf_addr, buf_physaddr, buf_len, refcnt=1, nb_segs=1,
+ * next=NULL, pool, priv_size. The other fields must be initialized
+ * by the caller.
+ *
  * @param mp
  *   The mempool from which mbuf is allocated.
  * @return
@@ -782,8 +787,9 @@  static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 	if (rte_mempool_get(mp, &mb) < 0)
 		return NULL;
 	m = (struct rte_mbuf *)mb;
-	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 0);
-	rte_mbuf_refcnt_set(m, 1);
+	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
+	RTE_ASSERT(m->next == NULL);
+	RTE_ASSERT(m->nb_segs == 1);
 	__rte_mbuf_sanity_check(m, 0);
 
 	return m;
@@ -792,8 +798,13 @@  static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 /**
  * Put mbuf back into its original mempool.
  *
- * The caller must ensure that the mbuf is direct and that the
- * reference counter is 0.
+ * The caller must ensure that the mbuf is direct and properly
+ * reinitialized (refcnt=1, next=NULL, nb_segs=1), as done by
+ * rte_pktmbuf_prefree_seg().
+ *
+ * This function should be used with care, when optimization is
+ * required. For standard needs, prefer rte_pktmbuf_free() or
+ * rte_pktmbuf_free_seg().
  *
  * @param m
  *   The mbuf to be freed.
@@ -802,13 +813,16 @@  static inline void __attribute__((always_inline))
 rte_mbuf_raw_free(struct rte_mbuf *m)
 {
 	RTE_ASSERT(RTE_MBUF_DIRECT(m));
-	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 0);
+	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
+	RTE_ASSERT(m->next == NULL);
+	RTE_ASSERT(m->nb_segs == 1);
+	__rte_mbuf_sanity_check(m, 0);
 	rte_mempool_put(m->pool, m);
 }
 
 /* compat with older versions */
 __rte_deprecated
-static inline void __attribute__((always_inline))
+static inline void
 __rte_mbuf_raw_free(struct rte_mbuf *m)
 {
 	rte_mbuf_raw_free(m);
@@ -1219,8 +1233,12 @@  static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
 	m->data_len = 0;
 	m->ol_flags = 0;
 
-	if (rte_mbuf_refcnt_update(md, -1) == 0)
+	if (rte_mbuf_refcnt_update(md, -1) == 0) {
+		md->next = NULL;
+		md->nb_segs = 1;
+		rte_mbuf_refcnt_set(md, 1);
 		rte_mbuf_raw_free(md);
+	}
 }
 
 /**
@@ -1244,9 +1262,13 @@  rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 	__rte_mbuf_sanity_check(m, 0);
 
 	if (likely(rte_mbuf_refcnt_update(m, -1) == 0)) {
-		/* if this is an indirect mbuf, it is detached. */
 		if (RTE_MBUF_INDIRECT(m))
 			rte_pktmbuf_detach(m);
+
+		m->next = NULL;
+		m->nb_segs = 1;
+		rte_mbuf_refcnt_set(m, 1);
+
 		return m;
 	}
 	return NULL;
@@ -1273,10 +1295,8 @@  static inline void __attribute__((always_inline))
 rte_pktmbuf_free_seg(struct rte_mbuf *m)
 {
 	m = rte_pktmbuf_prefree_seg(m);
-	if (likely(m != NULL)) {
-		m->next = NULL;
+	if (likely(m != NULL))
 		rte_mbuf_raw_free(m);
-	}
 }
 
 /**
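
For reference, a minimal, driver-agnostic sketch of the early Rx refill loop
the first objective targets (rx_refill() and the ring array are illustrative
names, not from any driver): only first-cacheline fields are written, since
m->next in the second cache line is already NULL while the mbuf sits in the
pool.

#include <rte_mbuf.h>

/* Refill up to n Rx descriptors with fresh mbufs from the pool.
 * Returns the number of mbufs actually placed in the ring. */
static uint16_t
rx_refill(struct rte_mempool *mp, struct rte_mbuf **ring, uint16_t n)
{
	uint16_t i;

	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = rte_mbuf_raw_alloc(mp);

		if (m == NULL)
			break;
		/* Only the first cache line is touched here; next, nb_segs
		 * and refcnt already hold their default values. */
		m->data_off = RTE_PKTMBUF_HEADROOM;
		ring[i] = m;
	}
	return i;
}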