[dpdk-dev,v3,1/4] vmxnet3: Avoid memory leak in vmxnet3_dev_rx_queue_setup.

Message ID BY2PR05MB235953D3D13114C9CD89C3FCAF7F0@BY2PR05MB2359.namprd05.prod.outlook.com (mailing list archive)
State Not Applicable, archived
Checks

Context                Check      Description
ci/Intel compilation   success    Compilation OK

Commit Message

Yong Wang Jan. 18, 2017, 2:05 a.m. UTC
  Any downside with free/reallocation now that memzone can be freed?  Allocation with max ring size should work but is kind of wasteful in terms of memory usage and I assume this type of ring size change should not be a frequent operation.

From: nickcooper-zhangtonghao [mailto:nic@opencloud.tech]

Sent: Tuesday, January 17, 2017 5:37 PM
To: Yong Wang <yongwang@vmware.com>
Cc: ferruh.yigit@intel.com; dev@dpdk.org
Subject: Re: [PATCH v3 1/4] vmxnet3: Avoid memory leak in vmxnet3_dev_rx_queue_setup.


On Jan 18, 2017, at 4:15 AM, Yong Wang <yongwang@vmware.com> wrote:

-----Original Message-----
From: Nick Zhang [mailto:nic@opencloud.tech]

Sent: Sunday, January 8, 2017 7:00 PM
To: Yong Wang <yongwang@vmware.com>
Cc: ferruh.yigit@intel.com; dev@dpdk.org; Nick Zhang <nic@opencloud.tech>
Subject: [PATCH v3 1/4] vmxnet3: Avoid memory leak in
vmxnet3_dev_rx_queue_setup.

This patch checks the "nb_desc" parameter for the rx queue.
The vmxnet3 Rx ring length should be between 128 and 4096.
The patch releases the rxq and re-allocates it when a
different "nb_desc" is requested.

Signed-off-by: Nick Zhang <nic@opencloud.tech>

---
drivers/net/vmxnet3/vmxnet3_rxtx.c | 30 ++++++++++++++++++------------
1 file changed, 18 insertions(+), 12 deletions(-)
  

Comments

nickcooper-zhangtonghao Jan. 18, 2017, 2:41 a.m. UTC | #1
> On Jan 18, 2017, at 10:05 AM, Yong Wang <yongwang@vmware.com> wrote:
> 
> Any downside with free/reallocation now that memzone can be freed?  Allocation with max ring size should work but is kind of wasteful in terms of memory usage and I assume this type of ring size change should not be a frequent operation.


We do not free/realloc them anymore. I guess it is necessary to use the
allocation with the max ring size. The seg fault really bothers us,
and the app (e.g. OVS) may change the ring size frequently when tuning performance.
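
(For context, a minimal sketch of that max-size approach, meant to sit inside vmxnet3_rxtx.c where the driver headers are already included. The helper name, the per-queue memzone name and the ring fields used here are illustrative assumptions, not the actual v4 code.)

/* Sketch only: reserve the Rx descriptor ring once, sized for the
 * maximum supported ring length, so that a later queue setup with a
 * different nb_desc never needs to re-allocate the memzone. */
static int
vmxnet3_rx_ring_reserve_max(struct rte_eth_dev *dev, uint16_t queue_idx,
                            struct vmxnet3_cmd_ring *ring, uint16_t nb_desc,
                            int socket_id)
{
        char mz_name[RTE_MEMZONE_NAMESIZE];
        const struct rte_memzone *mz;
        size_t size = sizeof(struct Vmxnet3_RxDesc) * VMXNET3_RX_RING_MAX_SIZE;

        /* A real implementation needs a unique per-port/per-queue name. */
        snprintf(mz_name, sizeof(mz_name), "vmxnet3_%u_rx_ring_%u",
                 dev->data->port_id, queue_idx);

        mz = rte_memzone_reserve_aligned(mz_name, size, socket_id, 0,
                                         VMXNET3_RING_BA_ALIGN);
        if (mz == NULL) {
                PMD_INIT_LOG(ERR, "ERROR: reserving rx ring memzone");
                return -ENOMEM;
        }

        ring->base = mz->addr;
        ring->basePA = mz->phys_addr;
        /* The ring is still driven at the requested length. */
        ring->size = nb_desc;
        return 0;
}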
  

Patch

diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index b109168..e77374f 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -926,6 +926,21 @@ 

     PMD_INIT_FUNC_TRACE();

+    /* Rx vmxnet rings length should be between 128-4096 */
+    if (nb_desc < VMXNET3_DEF_RX_RING_SIZE) {
+         PMD_INIT_LOG(ERR, "VMXNET3 Rx Ring Size Min: 128");
+         return -EINVAL;
+    } else if (nb_desc > VMXNET3_RX_RING_MAX_SIZE) {
+         PMD_INIT_LOG(ERR, "VMXNET3 Rx Ring Size Max: 4096");
+         return -EINVAL;
+    }
+
+    /* Free memory prior to re-allocation if needed. */
+    if (dev->data->rx_queues[queue_idx] != NULL) {
+         vmxnet3_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);

Currently vmxnet3_dev_rx_queue_release() does not free the device ring memory. As a result, the device ring memory allocated for the previous descriptor size keeps being used, which should also explain why you are observing a seg fault with an increased ring size. If you handle freeing the device ring memory in vmxnet3_dev_rx_queue_release(), I think the pre-allocation of the ring with the max size will not be needed anymore.
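
(For illustration only, handling that in the release path might look roughly like the sketch below. The existing cleanup calls are an approximation of what the driver already does; the "mz" member holding the ring memzone is hypothetical and would have to be stored by queue setup; rte_memzone_free() is the API that makes the memzone freeable.)

/* Sketch only: also free the device ring memzone on queue release, so a
 * later setup with a larger nb_desc starts from a clean slate. */
void
vmxnet3_dev_rx_queue_release(void *rxq)
{
        int i;
        vmxnet3_rx_queue_t *rq = rxq;

        if (rq == NULL)
                return;

        /* Release the mbufs still held by the command rings. */
        vmxnet3_rx_queue_release_mbufs(rq);

        /* Release the per-ring buf_info arrays. */
        for (i = 0; i < VMXNET3_RX_CMDRING_SIZE; i++)
                rte_free(rq->cmd_ring[i].buf_info);

        /* Free the descriptor ring memzone (the "mz" field is hypothetical,
         * set where the memzone is reserved in queue setup). */
        rte_memzone_free(rq->mz);

        rte_free(rq);
}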

Yes, we should not free and re-allocate the ring, but instead allocate the ring only once with the max size. I will submit a v4.
Thank you.