[v2,26/37] net/mvpp2: introduce fixup for fifo overrun

Message ID 20210122191925.24308-27-lironh@marvell.com (mailing list archive)
State Superseded, archived
Delegated to: Jerin Jacob
Series: net/mvpp2: misc updates

Checks

Context        Check    Description
ci/checkpatch  success  coding style OK

Commit Message

Liron Himi Jan. 22, 2021, 7:19 p.m. UTC
  From: Liron Himi <lironh@marvell.com>

Currently the HW is configured with only one pool, whose
buffer size may be larger than the rx-fifo-size.
In that situation, frames larger than the fifo-size
get dropped due to a fifo overrun.
This happens because the HW works in cut-through mode, which
waits until the fifo holds at least as many bytes as defined
by the smallest pool's buffer size.

This patch adds a dummy pool whose buffer size
is very small (smaller than a 64B frame). This tricks the HW so
frames of any size get passed from the FIFO to the PP2.

Signed-off-by: Liron Himi <lironh@marvell.com>
---
 drivers/net/mvpp2/mrvl_ethdev.c | 93 +++++++++++++++++++++++++++------
 drivers/net/mvpp2/mrvl_ethdev.h |  2 +
 drivers/net/mvpp2/mrvl_qos.c    |  1 +
 3 files changed, 81 insertions(+), 15 deletions(-)
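
The gist of the workaround, condensed from the patch at the bottom of
this page (the MUSDK pp2_bpool_* calls and driver identifiers are
exactly as they appear in the diff; error handling is trimmed, so read
it as a sketch rather than the literal code):

    /* Register one tiny "dummy" bpool per packet processor, so the
     * smallest buffer size the HW sees is below any valid frame size
     * and cut-through forwarding starts for every frame. */
    static int mrvl_pp2_fixup_init(void)
    {
    	struct pp2_bpool_params bpool_params;
    	char name[15];
    	int i;

    	for (i = 0; i < pp2_get_num_inst(); i++) {
    		dummy_pool_id[i] = mrvl_reserve_bit(&used_bpools[i],
    						    PP2_BPOOL_NUM_POOLS);
    		snprintf(name, sizeof(name), "pool-%d:%d", i,
    			 dummy_pool_id[i]);
    		memset(&bpool_params, 0, sizeof(bpool_params));
    		bpool_params.match = name;
    		bpool_params.buff_len = MRVL_PKT_OFFS; /* < 64B frame */
    		bpool_params.dummy_short_pool = 1;     /* never refilled */
    		if (pp2_bpool_init(&bpool_params, &dummy_pool[i]) != 0)
    			return -1;
    	}
    	return 0;
    }

    /* Each TC then lists the dummy pool as a second, "short" pool
     * next to the real one (see the mrvl_qos.c hunk below): */
    param->pools[0][0] = bpool;
    param->pools[0][1] = dummy_pool[bpool->pp2_id];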
  

Comments

Ferruh Yigit Jan. 26, 2021, 11:49 p.m. UTC | #1
On 1/22/2021 7:19 PM, lironh@marvell.com wrote:
> From: Liron Himi <lironh@marvell.com>
> 
> Currently the HW is configured with only one pool, whose
> buffer size may be larger than the rx-fifo-size.
> In that situation, frames larger than the fifo-size
> get dropped due to a fifo overrun.
> This happens because the HW works in cut-through mode, which
> waits until the fifo holds at least as many bytes as defined
> by the smallest pool's buffer size.
> 
> This patch adds a dummy pool whose buffer size
> is very small (smaller than a 64B frame). This tricks the HW so
> frames of any size get passed from the FIFO to the PP2.
> 
> Signed-off-by: Liron Himi <lironh@marvell.com>

so this is fixing the FIFO overrun, can you please provide the Fixes line?
And should this patch be backported?
  
Liron Himi Jan. 27, 2021, 2:08 p.m. UTC | #2

-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@intel.com> 
Sent: Wednesday, 27 January 2021 01:50
To: Liron Himi <lironh@marvell.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Cc: dev@dpdk.org
Subject: [EXT] Re: [dpdk-dev] [PATCH v2 26/37] net/mvpp2: introduce fixup for fifo overrun

On 1/22/2021 7:19 PM, lironh@marvell.com wrote:
> [...commit message snipped...]

so this is fixing the FIFO overrun, can you please provide the Fixes line?
[L.H.] It is a combination of the HW fifo size (which is defined by the kernel driver), the given buffer size, and the incoming packet size. I don't think I can point to a line in the DPDK driver code that this patch is fixing.
It is a kind of workaround for a HW issue.

And should this patch be backported?
[L.H.] It cannot be backported, as it depends on a MUSDK API change.
  
Ferruh Yigit Jan. 27, 2021, 2:34 p.m. UTC | #3
On 1/27/2021 2:08 PM, Liron Himi wrote:
> [...]
> so this is fixing the FIFO overrun, can you please provide the Fixes line?
> [L.H.] It is a combination of the HW fifo size (which is defined by the kernel driver), the given buffer size, and the incoming packet size. I don't think I can point to a line in the DPDK driver code that this patch is fixing.
> It is a kind of workaround for a HW issue.
> 

Has the HW FIFO size, or the HW behavior (waiting for at least the smallest
pool's buffer size), changed with a recent kernel driver or MUSDK version to
cause this problem? If so, can you please mention/reference that change in the
commit log?

> And should this patch be backported?
> [L.H.] It cannot be backported, as it depends on a MUSDK API change.
> 

Does the fix or the problem depend on the MUSDK API change? If the fix has a
dependency, will that be a problem, since it means the latest driver won't be
usable with an old MUSDK version?
Can you please clarify the dependency in the commit log?

Thanks,
ferruh
  
Liron Himi Jan. 27, 2021, 2:46 p.m. UTC | #4
-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@intel.com> 
Sent: Wednesday, 27 January 2021 16:35
To: Liron Himi <lironh@marvell.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Cc: dev@dpdk.org
Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v2 26/37] net/mvpp2: introduce fixup for fifo overrun

On 1/27/2021 2:08 PM, Liron Himi wrote:
> [...]
> so this is fixing the FIFO overrun, can you please provide the Fixes line?
> [L.H.] It is a combination of the HW fifo size (which is defined by the kernel driver), the given buffer size, and the incoming packet size. I don't think I can point to a line in the DPDK driver code that this patch is fixing.
> It is a kind of workaround for a HW issue.

Has the HW FIFO size, or the HW behavior (waiting for at least the smallest pool's buffer size), changed with a recent kernel driver or MUSDK version to cause this problem? If so, can you please mention/reference that change in the commit log?
[L.H.] I don't think it is related to a change; this combination was only recently tested by our QA team.
I think it may have more effect when the buffer size is 9K, which in some cases may exceed the fifo size of a specific port.

> And should this patch be backported?
> [L.H.] It cannot be backported, as it depends on a MUSDK API change.
> 

Does the fix or the problem depend on the MUSDK API change? If the fix has a dependency, will that be a problem, since it means the latest driver won't be usable with an old MUSDK version?
Can you please clarify the dependency in the commit log?
[L.H.] I have already updated the doc to state that the latest driver (including the meson build support) requires the new MUSDK version.

Thanks,
ferruh
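
To make the failure mode discussed above concrete, here is a minimal
model of the cut-through condition described in the thread; the FIFO
and buffer sizes used are hypothetical, not values taken from the
kernel driver or MUSDK:

    /* Minimal model: the port starts forwarding a frame out of the RX
     * FIFO once the FIFO holds at least the smallest pool's buffer
     * size worth of bytes; if that threshold exceeds what the FIFO can
     * ever hold, the FIFO fills up and overruns first. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool rx_fifo_overruns(unsigned int rx_fifo_size,
                                 unsigned int smallest_pool_buff)
    {
    	return smallest_pool_buff > rx_fifo_size;
    }

    int main(void)
    {
    	/* Hypothetical sizes: a 4KB per-port RX FIFO with a single
    	 * 9KB (jumbo) pool overruns; adding the dummy short pool
    	 * drops the threshold below any valid frame size. */
    	printf("single 9KB pool: overrun=%d\n",
    	       rx_fifo_overruns(4096, 9216));
    	printf("with dummy pool: overrun=%d\n",
    	       rx_fifo_overruns(4096, 64));
    	return 0;
    }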
  
Ferruh Yigit Jan. 27, 2021, 2:57 p.m. UTC | #5
On 1/27/2021 2:46 PM, Liron Himi wrote:
> [...]
> Has the HW FIFO size, or the HW behavior (waiting for at least the smallest pool's buffer size), changed with a recent kernel driver or MUSDK version to cause this problem? If so, can you please mention/reference that change in the commit log?
> [L.H.] I don't think it is related to a change; this combination was only recently tested by our QA team.
> I think it may have more effect when the buffer size is 9K, which in some cases may exceed the fifo size of a specific port.
> 

OK, thanks for clarification.

>> And should this patch be backported?
>> [L.H.] It cannot be backported, as it depends on a MUSDK API change.
>>
> 
> Does the fix or the problem depend on the MUSDK API change? If the fix has a dependency, will that be a problem, since it means the latest driver won't be usable with an old MUSDK version?
> Can you please clarify the dependency in the commit log?
> [L.H.] I have already updated the doc to state that the latest driver (including the meson build support) requires the new MUSDK version.
> 
> Thanks,
> ferruh
>
  

Patch

diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 12403666b..79e705497 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -90,6 +90,8 @@  static int used_bpools[PP2_NUM_PKT_PROC] = {
 static struct pp2_bpool *mrvl_port_to_bpool_lookup[RTE_MAX_ETHPORTS];
 static int mrvl_port_bpool_size[PP2_NUM_PKT_PROC][PP2_BPOOL_NUM_POOLS][RTE_MAX_LCORE];
 static uint64_t cookie_addr_high = MRVL_COOKIE_ADDR_INVALID;
+static int dummy_pool_id[PP2_NUM_PKT_PROC];
+struct pp2_bpool *dummy_pool[PP2_NUM_PKT_PROC] = {0};
 
 struct mrvl_ifnames {
 	const char *names[PP2_NUM_ETH_PPIO * PP2_NUM_PKT_PROC];
@@ -191,6 +193,52 @@  static struct {
 
 #define MRVL_NUM_XSTATS RTE_DIM(mrvl_xstats_tbl)
 
+static inline int
+mrvl_reserve_bit(int *bitmap, int max)
+{
+	int n = sizeof(*bitmap) * 8 - __builtin_clz(*bitmap);
+
+	if (n >= max)
+		return -1;
+
+	*bitmap |= 1 << n;
+
+	return n;
+}
+
+static int
+mrvl_pp2_fixup_init(void)
+{
+	struct pp2_bpool_params bpool_params;
+	char			name[15];
+	int			err, i;
+
+	memset(dummy_pool, 0, sizeof(dummy_pool));
+	for (i = 0; i < pp2_get_num_inst(); i++) {
+		dummy_pool_id[i] = mrvl_reserve_bit(&used_bpools[i],
+					     PP2_BPOOL_NUM_POOLS);
+		if (dummy_pool_id[i] < 0) {
+			MRVL_LOG(ERR, "Can't find free pool\n");
+			return -1;
+		}
+
+		memset(name, 0, sizeof(name));
+		snprintf(name, sizeof(name), "pool-%d:%d", i, dummy_pool_id[i]);
+		memset(&bpool_params, 0, sizeof(bpool_params));
+		bpool_params.match = name;
+		bpool_params.buff_len = MRVL_PKT_OFFS;
+		bpool_params.dummy_short_pool = 1;
+		err = pp2_bpool_init(&bpool_params, &dummy_pool[i]);
+		if (err != 0 || !dummy_pool[i]) {
+			MRVL_LOG(ERR, "BPool init failed!\n");
+			used_bpools[i] &= ~(1 << dummy_pool_id[i]);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
 /**
  * Initialize packet processor.
  *
@@ -200,7 +248,8 @@  static struct {
 static int
 mrvl_init_pp2(void)
 {
-	struct pp2_init_params init_params;
+	struct pp2_init_params	init_params;
+	int			err;
 
 	memset(&init_params, 0, sizeof(init_params));
 	init_params.hif_reserved_map = MRVL_MUSDK_HIFS_RESERVED;
@@ -209,7 +258,32 @@  mrvl_init_pp2(void)
 	if (mrvl_cfg && mrvl_cfg->pp2_cfg.prs_udfs.num_udfs)
 		memcpy(&init_params.prs_udfs, &mrvl_cfg->pp2_cfg.prs_udfs,
 		       sizeof(struct pp2_parse_udfs));
-	return pp2_init(&init_params);
+	err = pp2_init(&init_params);
+	if (err != 0) {
+		MRVL_LOG(ERR, "PP2 init failed");
+		return -1;
+	}
+
+	err = mrvl_pp2_fixup_init();
+	if (err != 0) {
+		MRVL_LOG(ERR, "PP2 fixup init failed");
+		return -1;
+	}
+
+	return 0;
+}
+
+static void
+mrvl_pp2_fixup_deinit(void)
+{
+	int i;
+
+	for (i = 0; i < PP2_NUM_PKT_PROC; i++) {
+		if (!dummy_pool[i])
+			continue;
+		pp2_bpool_deinit(dummy_pool[i]);
+		used_bpools[i] &= ~(1 << dummy_pool_id[i]);
+	}
 }
 
 /**
@@ -221,6 +295,8 @@  mrvl_init_pp2(void)
 static void
 mrvl_deinit_pp2(void)
 {
+	mrvl_pp2_fixup_deinit();
+
 	pp2_deinit();
 }
 
@@ -261,19 +337,6 @@  mrvl_get_bpool_size(int pp2_id, int pool_id)
 	return size;
 }
 
-static inline int
-mrvl_reserve_bit(int *bitmap, int max)
-{
-	int n = sizeof(*bitmap) * 8 - __builtin_clz(*bitmap);
-
-	if (n >= max)
-		return -1;
-
-	*bitmap |= 1 << n;
-
-	return n;
-}
-
 static int
 mrvl_init_hif(int core_id)
 {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.h b/drivers/net/mvpp2/mrvl_ethdev.h
index 27d34ecfe..b0cdddd15 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.h
+++ b/drivers/net/mvpp2/mrvl_ethdev.h
@@ -196,6 +196,8 @@  extern int mrvl_logtype;
 	rte_log(RTE_LOG_ ## level, mrvl_logtype, "%s(): " fmt "\n", \
 		__func__, ##args)
 
+extern struct pp2_bpool *dummy_pool[PP2_NUM_PKT_PROC];
+
 /**
  * Convert string to uint32_t with extra checks for result correctness.
  *
diff --git a/drivers/net/mvpp2/mrvl_qos.c b/drivers/net/mvpp2/mrvl_qos.c
index 310fd7384..a3add540b 100644
--- a/drivers/net/mvpp2/mrvl_qos.c
+++ b/drivers/net/mvpp2/mrvl_qos.c
@@ -881,6 +881,7 @@  setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
 
 	param->pkt_offset = MRVL_PKT_OFFS;
 	param->pools[0][0] = bpool;
+	param->pools[0][1] = dummy_pool[bpool->pp2_id];
 	param->default_color = color;
 
 	inq_params = rte_zmalloc_socket("inq_params",