[PATCH 1/2] Memory Allocation: Fixes ms_idx jump (lookahead) during find_next_n() in fb_array library

Burakov, Anatoly anatoly.burakov at intel.com
Tue May 16 15:49:05 CEST 2023


Hi Vipin,

The fix looks good. It took a while to understand what's going on, but
it's a great find!

Reviewed-by: Anatoly Burakov <anatoly.burakov at intel.com>

A couple of comments below:

On 1/13/2023 1:08 PM, Vipin P R wrote:
> In legacy mem mode, when the fb_array is being populated, if there are holes in between, the ms_idx could go backward, and there will be an overlap of the region starting from the ms_idx returned later. I.e., a contiguous region in VA space ends up mapped to two different physical regions in PA space. This would result in the allocator assuming that the memory is contiguous even though there is a hole in between. In legacy mem, the allocator assumes that PA-contiguous regions are VA-contiguous as well.

Technically, while this bug is indeed triggered in legacy mode, it does 
not in any way depend on it, so I would suggest not mentioning it
explicitly.

Suggested commit message rewording:

fbarray: fix incorrect lookahead behavior

Currently, whenever the last bit of the current index mask is set
(meaning there is potentially a run starting at the end of the mask),
the lookahead loop is entered. In that loop, if the first bit of the
lookahead index is not set, the lookahead is stopped, and the current
lookahead mask index is assigned to the current index mask. However,
because at that point we are inside a for-loop that increments the
current index mask after each iteration, this results in an additional
mask index increment.

Fix it by leaving the current index mask at `lookahead_idx - 1`.

Fixes: c44d09811b40 ("eal: add shared indexed file-backed array")
Cc: anatoly.burakov at intel.com
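
To make the off-by-one easier to see, here is a minimal standalone
sketch of the control flow (the mask_ends_with_run()/run_continues()
helpers are hypothetical stand-ins, not the actual fbarray mask logic);
the point is that the outer loop's msk_idx++ runs right after the
break, so assigning lookahead_idx directly skips one mask:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helpers just to make the control flow compile;
 * not the real fbarray mask logic. */
static bool mask_ends_with_run(unsigned int idx) { return idx == 2; }
static bool run_continues(unsigned int idx) { (void)idx; return false; }

int main(void)
{
	const unsigned int n_masks = 8;
	unsigned int msk_idx, lookahead_idx;

	for (msk_idx = 0; msk_idx < n_masks; msk_idx++) {
		printf("scanning mask %u\n", msk_idx);
		if (!mask_ends_with_run(msk_idx))
			continue;
		for (lookahead_idx = msk_idx + 1; lookahead_idx < n_masks;
				lookahead_idx++) {
			if (!run_continues(lookahead_idx)) {
				/*
				 * Resume the outer scan at lookahead_idx.
				 * The outer loop increments msk_idx right
				 * after the break, so lookahead_idx - 1
				 * makes the next iteration land exactly
				 * on lookahead_idx; assigning
				 * lookahead_idx here would skip it.
				 */
				msk_idx = lookahead_idx - 1;
				break;
			}
		}
	}
	return 0;
}

With the old assignment, mask 3 would never be scanned; with the fix,
every mask is scanned exactly once.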

> 
> Cc: stable at dpdk.org
> 
> Signed-off-by: Vipin P R <vipinp at vmware.com>
> Acked-by: Kumara Parameshwaran <kparameshwar at vmware.com>
> ---
>   .mailmap                            | 1 +
>   lib/eal/common/eal_common_fbarray.c | 2 +-
>   2 files changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/.mailmap b/.mailmap
> index 75884b6..3707bf5 100644
> --- a/.mailmap
> +++ b/.mailmap
> @@ -1391,6 +1391,7 @@ Vincent Guo <guopengfei160 at 163.com>
>   Vincent Jardin <vincent.jardin at 6wind.com>
>   Vincent Li <vincent.mc.li at gmail.com>
>   Vincent S. Cojot <vcojot at redhat.com>
> +Vipin P R <vipinp at vmware.com> <vipinpadmamramesh at gmail.com>
>   Vipin Varghese <vipin.varghese at amd.com> <vipin.varghese at intel.com>
>   Vipul Ashri <vipul.ashri at oracle.com>
>   Vishal Kulkarni <vishal at chelsio.com>
> diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
> index f11f879..551bd87 100644
> --- a/lib/eal/common/eal_common_fbarray.c
> +++ b/lib/eal/common/eal_common_fbarray.c
> @@ -236,7 +236,7 @@ find_next_n(const struct rte_fbarray *arr, unsigned int start, unsigned int n,
>   				 * as well, so skip that on next iteration.
>   				 */
>   				ignore_msk = ~((1ULL << need) - 1);
> -				msk_idx = lookahead_idx;
> +				msk_idx = lookahead_idx - 1;
>   				break;
>   			}
>   

-- 
Thanks,
Anatoly


