app/pdump: enforcing pdump to use sw mempool

Message ID 1552663632-18742-1-git-send-email-hkalra@marvell.com (mailing list archive)
State Changes Requested, archived
Delegated to: Thomas Monjalon
Series: app/pdump: enforcing pdump to use sw mempool

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/intel-Performance-Testing success Performance Testing PASS

Commit Message

Harman Kalra March 15, 2019, 3:27 p.m. UTC
  Since pdump uses SW rings to manage packets,
it should also use a SW ring mempool for managing
its own copy of the packets.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 app/pdump/main.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
  

Comments

Thomas Monjalon July 4, 2019, 4:29 p.m. UTC | #1
15/03/2019 16:27, Harman Kalra:
> Since pdump uses SW rings to manage packets,
> it should also use a SW ring mempool for managing
> its own copy of the packets.

I'm not sure I understand the reasoning.
Reshma, Olivier, Andrew, any opinion?

Let's take a decision for this very old patch.
  
Olivier Matz July 5, 2019, 1:48 p.m. UTC | #2
Hi,

On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> 15/03/2019 16:27, Harman Kalra:
> > Since pdump uses SW rings to manage packets,
> > it should also use a SW ring mempool for managing
> > its own copy of the packets.
> 
> I'm not sure I understand the reasoning.
> Reshma, Olivier, Andrew, any opinion?
> 
> Let's take a decision for this very old patch.

From what I understand, many packet mempools are created to
store the copies of the dumped packets. I suppose it may not be
possible to create that many mempools by using the "best" mbuf
pool ops (from rte_mbuf_best_mempool_ops()).

Using "ring_mp_mc" as the mempool ops should always be possible.
I think it would be safer to use "ring_mp_mc" instead of
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
overridden on a specific platform.
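
For illustration, a minimal sketch of creating such a pool with the
"ring_mp_mc" ops passed explicitly; the size constants here are
placeholders only (the real app uses pt->total_num_mbufs and
pt->mbuf_data_size, as visible in the patch below):

#include <stdio.h>

#include <rte_mbuf.h>
#include <rte_lcore.h>
#include <rte_errno.h>

/* Placeholder sizes, for illustration only. */
#define EXAMPLE_NUM_MBUFS   8192
#define EXAMPLE_CACHE_SIZE  256
#define EXAMPLE_DATA_SIZE   RTE_MBUF_DEFAULT_BUF_SIZE

static struct rte_mempool *
create_sw_ring_pool(const char *name)
{
	struct rte_mempool *mp;

	/* Back the pool with the generic MP/MC ring ops so it never
	 * depends on a HW mempool driver owned by the primary process.
	 */
	mp = rte_pktmbuf_pool_create_by_ops(name,
			EXAMPLE_NUM_MBUFS, EXAMPLE_CACHE_SIZE,
			0 /* priv size */, EXAMPLE_DATA_SIZE,
			rte_socket_id(), "ring_mp_mc");
	if (mp == NULL)
		printf("mempool creation failed: %s\n",
				rte_strerror(rte_errno));

	return mp;
}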

Olivier
  
Harman Kalra July 5, 2019, 2:39 p.m. UTC | #3
On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> Hi,
> 
> On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > 15/03/2019 16:27, Harman Kalra:
> > > Since pdump uses SW rings to manage packets,
> > > it should also use a SW ring mempool for managing
> > > its own copy of the packets.
> > 
> > I'm not sure I understand the reasoning.
> > Reshma, Olivier, Andrew, any opinion?
> > 
> > Let's take a decision for this very old patch.
> 
> From what I understand, many packet mempools are created to
> store the copies of the dumped packets. I suppose it may not be
> possible to create that many mempools by using the "best" mbuf
> pool ops (from rte_mbuf_best_mempool_ops()).
> 
> Using "ring_mp_mc" as the mempool ops should always be possible.
> I think it would be safer to use "ring_mp_mc" instead of
> CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
> overridden on a specific platform.
> 
> Olivier

Following are some reasons for this patch:
1. As we all know, the dpdk-pdump app creates a mempool for receiving packets (from the primary process) into mbufs, which are then tx'ed into the pcap device and freed. Using a HW mempool to back the dpdk-pdump mbuf pool was generating a segmentation fault, because the HW mempool's VFIO device is set up by the primary process and the secondary has no access to its BAR regions.

2. Setting up a separate HW mempool VFIO device for the secondary generates the following error:
"cannot find TAILQ entry for PCI device!"
http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
which means the secondary cannot set up a new device that was not set up by the primary.

3. Since pdump creates the mempool only for its own local mbufs, we do not see a need for a HW mempool; in our opinion a SW mempool is capable enough to work in all conditions.
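
As an illustration of point 1 (not what the patch does, which simply
passes RTE_MBUF_DEFAULT_MEMPOOL_OPS, but only a sketch of the
constraint): a secondary process could avoid the platform's "best"
ops, which may name a HW mempool driver, along these lines:

#include <rte_eal.h>
#include <rte_mbuf_pool_ops.h>

/* Hypothetical helper: in a secondary process such as dpdk-pdump,
 * the "best" mempool ops may resolve to a HW mempool driver whose
 * VFIO/BAR mappings belong to the primary process, so fall back to
 * the pure-SW ring ops there. */
static const char *
pdump_pool_ops(void)
{
	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
		return "ring_mp_mc";

	return rte_mbuf_best_mempool_ops();
}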
  
Thomas Monjalon July 5, 2019, 3:09 p.m. UTC | #4
05/07/2019 16:39, Harman Kalra:
> On Fri, Jul 05, 2019 at 03:48:01PM +0200, Olivier Matz wrote:
> > On Thu, Jul 04, 2019 at 06:29:25PM +0200, Thomas Monjalon wrote:
> > > 15/03/2019 16:27, Harman Kalra:
> > > > Since pdump uses SW rings to manage packets,
> > > > it should also use a SW ring mempool for managing
> > > > its own copy of the packets.
> > > 
> > > I'm not sure I understand the reasoning.
> > > Reshma, Olivier, Andrew, any opinion?
> > > 
> > > Let's take a decision for this very old patch.
> > 
> > From what I understand, many packet mempools are created to
> > store the copies of the dumped packets. I suppose it may not be
> > possible to create that many mempools by using the "best" mbuf
> > pool ops (from rte_mbuf_best_mempool_ops()).
> > 
> > Using "ring_mp_mc" as the mempool ops should always be possible.
> > I think it would be safer to use "ring_mp_mc" instead of
> > CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS, because the latter could be
> > overridden on a specific platform.
> > 
> > Olivier
> 
> Following are some reasons for this patch:
> 1. As we all know, the dpdk-pdump app creates a mempool for receiving packets (from the primary process) into mbufs, which are then tx'ed into the pcap device and freed. Using a HW mempool to back the dpdk-pdump mbuf pool was generating a segmentation fault, because the HW mempool's VFIO device is set up by the primary process and the secondary has no access to its BAR regions.
> 
> 2. Setting up a separate HW mempool VFIO device for the secondary generates the following error:
> "cannot find TAILQ entry for PCI device!"
> http://git.dpdk.org/dpdk/tree/drivers/bus/pci/linux/pci_vfio.c#n823
> which means the secondary cannot set up a new device that was not set up by the primary.
> 
> 3. Since pdump creates the mempool only for its own local mbufs, we do not see a need for a HW mempool; in our opinion a SW mempool is capable enough to work in all conditions.

OK
The commit log is just missing an explanation that a HW mempool
cannot be used in the secondary process if it was initialized in
the primary, and that it cannot be initialized in the secondary
process either. Then it will become clear :)

Do you want to send a v2 with a reworded commit log?
  

Patch

diff --git a/app/pdump/main.c b/app/pdump/main.c
index ccf2a1d2f..d0e342645 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -598,11 +598,12 @@  create_mp_ring_vdev(void)
 		mbuf_pool = rte_mempool_lookup(mempool_name);
 		if (mbuf_pool == NULL) {
 			/* create mempool */
-			mbuf_pool = rte_pktmbuf_pool_create(mempool_name,
+			mbuf_pool = rte_pktmbuf_pool_create_by_ops(mempool_name,
 					pt->total_num_mbufs,
 					MBUF_POOL_CACHE_SIZE, 0,
 					pt->mbuf_data_size,
-					rte_socket_id());
+					rte_socket_id(),
+					RTE_MBUF_DEFAULT_MEMPOOL_OPS);
 			if (mbuf_pool == NULL) {
 				cleanup_rings();
 				rte_exit(EXIT_FAILURE,