[dpdk-stable] [dpdk-dev] [PATCH 2/2] net/bonding: fix oob access in "other" aggregator modes

Chas Williams 3chas3 at gmail.com
Sun Mar 24 14:35:29 CET 2019


Have you ever experienced this problem in practice? I ask because I am 
considering some fixes that would limit the number of slaves to a more 
reasonable number (and reduce the overall stack usage of the bonding driver 
in general).

On 3/21/19 4:28 PM, David Marchand wrote:
> From: Zhaohui <zhaohui8 at huawei.com>
> 
> slave aggregator_port_id is in [0, RTE_MAX_ETHPORTS-1] range.
> If RTE_MAX_ETHPORTS is > 8, we can hit out of bound accesses on
> agg_bandwidth[] and agg_count[] arrays.
> 
> Fixes: 6d72657ce379 ("net/bonding: add other aggregator modes")
> Cc: stable at dpdk.org
> 
> Signed-off-by: Zhaohui <zhaohui8 at huawei.com>
> Signed-off-by: David Marchand <david.marchand at redhat.com>

Acked-by: Chas Williams <chas3 at att.com>

> ---
>   drivers/net/bonding/rte_eth_bond_8023ad.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 3943ec1..5004898 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -669,8 +669,8 @@
>   	struct port *agg, *port;
>   	uint16_t slaves_count, new_agg_id, i, j = 0;
>   	uint16_t *slaves;
> -	uint64_t agg_bandwidth[8] = {0};
> -	uint64_t agg_count[8] = {0};
> +	uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
> +	uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
>   	uint16_t default_slave = 0;
>   	uint16_t mode_count_id;
>   	uint16_t mode_band_id;
> 
