patch 'net/mana: prevent values overflow returned from RDMA layer' has been queued to stable release 22.11.5

luca.boccassi at gmail.com
Thu Mar 7 02:31:29 CET 2024


Hi,

FYI, your patch has been queued to stable release 22.11.5

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 03/09/24. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
done correctly.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/d3273299a7566f6396d4a49f2e2f4c9fa8a21636

Thanks.

Luca Boccassi

---
From d3273299a7566f6396d4a49f2e2f4c9fa8a21636 Mon Sep 17 00:00:00 2001
From: Long Li <longli at microsoft.com>
Date: Thu, 18 Jan 2024 10:05:37 -0800
Subject: [PATCH] net/mana: prevent values overflow returned from RDMA layer

[ upstream commit b7e79896c47e9b884eaed1bac439f84250fd959f ]

The device capabilities reported from the RDMA layer are in int. Those
values can overflow the data types defined in dev_info_get().

Fix this by applying an upper bound before returning those values.

Fixes: 517ed6e2d590 ("net/mana: add basic driver with build environment")

Signed-off-by: Long Li <longli at microsoft.com>
---
 drivers/net/mana/mana.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mana/mana.c b/drivers/net/mana/mana.c
index 896b53ed35..eb3b734949 100644
--- a/drivers/net/mana/mana.c
+++ b/drivers/net/mana/mana.c
@@ -292,8 +292,8 @@ mana_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->min_rx_bufsize = MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen = MAX_FRAME_SIZE;
 
-	dev_info->max_rx_queues = priv->max_rx_queues;
-	dev_info->max_tx_queues = priv->max_tx_queues;
+	dev_info->max_rx_queues = RTE_MIN(priv->max_rx_queues, UINT16_MAX);
+	dev_info->max_tx_queues = RTE_MIN(priv->max_tx_queues, UINT16_MAX);
 
 	dev_info->max_mac_addrs = MANA_MAX_MAC_ADDR;
 	dev_info->max_hash_mac_addrs = 0;
@@ -334,16 +334,20 @@ mana_dev_info_get(struct rte_eth_dev *dev,
 
 	/* Buffer limits */
 	dev_info->rx_desc_lim.nb_min = MIN_BUFFERS_PER_QUEUE;
-	dev_info->rx_desc_lim.nb_max = priv->max_rx_desc;
+	dev_info->rx_desc_lim.nb_max = RTE_MIN(priv->max_rx_desc, UINT16_MAX);
 	dev_info->rx_desc_lim.nb_align = MIN_BUFFERS_PER_QUEUE;
-	dev_info->rx_desc_lim.nb_seg_max = priv->max_recv_sge;
-	dev_info->rx_desc_lim.nb_mtu_seg_max = priv->max_recv_sge;
+	dev_info->rx_desc_lim.nb_seg_max =
+		RTE_MIN(priv->max_recv_sge, UINT16_MAX);
+	dev_info->rx_desc_lim.nb_mtu_seg_max =
+		RTE_MIN(priv->max_recv_sge, UINT16_MAX);
 
 	dev_info->tx_desc_lim.nb_min = MIN_BUFFERS_PER_QUEUE;
-	dev_info->tx_desc_lim.nb_max = priv->max_tx_desc;
+	dev_info->tx_desc_lim.nb_max = RTE_MIN(priv->max_tx_desc, UINT16_MAX);
 	dev_info->tx_desc_lim.nb_align = MIN_BUFFERS_PER_QUEUE;
-	dev_info->tx_desc_lim.nb_seg_max = priv->max_send_sge;
-	dev_info->rx_desc_lim.nb_mtu_seg_max = priv->max_recv_sge;
+	dev_info->tx_desc_lim.nb_seg_max =
+		RTE_MIN(priv->max_send_sge, UINT16_MAX);
+	dev_info->tx_desc_lim.nb_mtu_seg_max =
+		RTE_MIN(priv->max_send_sge, UINT16_MAX);
 
 	/* Speed */
 	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
@@ -1290,9 +1294,9 @@ mana_probe_port(struct ibv_device *ibdev, struct ibv_device_attr_ex *dev_attr,
 	priv->max_mr = dev_attr->orig_attr.max_mr;
 	priv->max_mr_size = dev_attr->orig_attr.max_mr_size;
 
-	DRV_LOG(INFO, "dev %s max queues %d desc %d sge %d",
+	DRV_LOG(INFO, "dev %s max queues %d desc %d sge %d mr %" PRIu64,
 		name, priv->max_rx_queues, priv->max_rx_desc,
-		priv->max_send_sge);
+		priv->max_send_sge, priv->max_mr_size);
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-- 
2.39.2
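
For readers unfamiliar with the pattern above: the fix clamps the int-typed
capabilities reported by the verbs layer before they are stored in the
narrower uint16_t fields of rte_eth_dev_info. Below is a minimal standalone
sketch of that clamping, not part of the patch; the struct names and the
RTE_MIN stand-in are illustrative only, assumed here in place of the real
definitions from rte_common.h and rte_ethdev.h.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for DPDK's RTE_MIN() from rte_common.h (illustrative only). */
#define RTE_MIN(a, b) ((a) < (b) ? (a) : (b))

/* Hypothetical mirror of the int-typed capabilities reported by the
 * RDMA/verbs layer (e.g. fields of struct ibv_device_attr). */
struct rdma_caps {
	int max_qp;        /* can legitimately exceed 65535 */
	int max_recv_sge;
};

/* Hypothetical mirror of the narrower uint16_t fields filled in by
 * dev_info_get() (rte_eth_dev_info / rte_eth_desc_lim). */
struct eth_caps {
	uint16_t max_rx_queues;
	uint16_t nb_seg_max;
};

int main(void)
{
	struct rdma_caps rdma = { .max_qp = 70000, .max_recv_sge = 30 };
	struct eth_caps eth;

	/* A plain assignment of an int > 65535 into a uint16_t field
	 * silently truncates (70000 would become 4464). Clamping to
	 * UINT16_MAX instead reports the largest value the field can
	 * express, which is what the patch does with RTE_MIN(). */
	eth.max_rx_queues = RTE_MIN(rdma.max_qp, UINT16_MAX);
	eth.nb_seg_max = RTE_MIN(rdma.max_recv_sge, UINT16_MAX);

	printf("max_rx_queues=%" PRIu16 " nb_seg_max=%" PRIu16 "\n",
	       eth.max_rx_queues, eth.nb_seg_max);
	return 0;
}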

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2024-03-07 01:05:40.937231526 +0000
+++ 0072-net-mana-prevent-values-overflow-returned-from-RDMA-.patch	2024-03-07 01:05:34.894942054 +0000
@@ -1 +1 @@
-From b7e79896c47e9b884eaed1bac439f84250fd959f Mon Sep 17 00:00:00 2001
+From d3273299a7566f6396d4a49f2e2f4c9fa8a21636 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit b7e79896c47e9b884eaed1bac439f84250fd959f ]
+
@@ -12 +13,0 @@
-Cc: stable at dpdk.org
@@ -20 +21 @@
-index 781ed76139..58257c971e 100644
+index 896b53ed35..eb3b734949 100644
@@ -23 +24 @@
-@@ -296,8 +296,8 @@ mana_dev_info_get(struct rte_eth_dev *dev,
+@@ -292,8 +292,8 @@ mana_dev_info_get(struct rte_eth_dev *dev,
@@ -25 +26 @@
- 	dev_info->max_rx_pktlen = MANA_MAX_MTU + RTE_ETHER_HDR_LEN;
+ 	dev_info->max_rx_pktlen = MAX_FRAME_SIZE;
@@ -34 +35 @@
-@@ -338,16 +338,20 @@ mana_dev_info_get(struct rte_eth_dev *dev,
+@@ -334,16 +334,20 @@ mana_dev_info_get(struct rte_eth_dev *dev,
@@ -61 +62 @@
-@@ -1385,9 +1389,9 @@ mana_probe_port(struct ibv_device *ibdev, struct ibv_device_attr_ex *dev_attr,
+@@ -1290,9 +1294,9 @@ mana_probe_port(struct ibv_device *ibdev, struct ibv_device_attr_ex *dev_attr,

