[PATCH 2/3] mem: fix ASan shadow for remapped memory segments

Burakov, Anatoly anatoly.burakov at intel.com
Wed Apr 27 17:32:03 CEST 2022


On 26-Apr-22 5:07 PM, Burakov, Anatoly wrote:
> On 26-Apr-22 3:15 PM, David Marchand wrote:
>> On Tue, Apr 26, 2022 at 2:54 PM Burakov, Anatoly
>> <anatoly.burakov at intel.com> wrote:
>>>>> @@ -1040,9 +1040,25 @@ malloc_heap_free(struct malloc_elem *elem)
>>>>>
>>>>>           rte_mcfg_mem_write_unlock();
>>>>>    free_unlock:
>>>>> -       /* Poison memory range if belonging to some still mapped pages. */
>>>>> -       if (!unmapped_pages)
>>>>> +       if (!unmapped_pages) {
>>>>>                   asan_set_freezone(asan_ptr, asan_data_len);
>>>>> +       } else {
>>>>> +               /*
>>>>> +                * We may be in a situation where we unmapped pages like this:
>>>>> +                * malloc header | free space | unmapped space | free space | malloc header
>>>>> +                */
>>>>> +               void *free1_start = asan_ptr;
>>>>> +               void *free1_end = aligned_start;
>>>>> +               void *free2_start = RTE_PTR_ADD(aligned_start, aligned_len);
>>>>> +               void *free2_end = RTE_PTR_ADD(asan_ptr, asan_data_len);
>>>>> +
>>>>> +               if (free1_start < free1_end)
>>>>> +                       asan_set_freezone(free1_start,
>>>>> +                               RTE_PTR_DIFF(free1_end, free1_start));
>>>>> +               if (free2_start < free2_end)
>>>>> +                       asan_set_freezone(free2_start,
>>>>> +                               RTE_PTR_DIFF(free2_end, free2_start));
>>>>> +       }
>>>>>
>>>>>           rte_spinlock_unlock(&(heap->lock));
>>>>>           return ret;
>>>>>
>>>>
>>>> Something like that, yes. I will have to think through this a bit more,
>>>> especially in light of your func_reentrancy splat :)
>>>>
>>>
>>> So, the reason the splat in the func_reentrancy test happens is as
>>> follows: the above patch is sorta correct (I have a different one that
>>> does the same thing), but incomplete. What happens is that when we add
>>> new memory, we integrate it into our existing malloc heap, which
>>> triggers `malloc_elem_join_adjacent_free()`, which in turn writes into
>>> the old header space being merged - space that may already be marked
>>> as "freed". So, again we are hit with our internal allocator messing
>>> with ASan.
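
To make the failure mode concrete, here is a minimal sketch of the access
pattern involved (the struct and function below are simplified stand-ins
for illustration, not the real malloc_elem layout or API):

#include <stddef.h>
#include <string.h>

/* simplified stand-in for DPDK's element header - not the real layout */
struct malloc_elem {
	struct malloc_elem *prev;
	struct malloc_elem *next;
	size_t size;
};

/* the merge performed when newly added memory is integrated into the
 * heap boils down to something like this */
static void
join_with_next(struct malloc_elem *elem, struct malloc_elem *next)
{
	/* 'next' may be an old header whose ASan shadow was already set
	 * to "freed" by asan_set_freezone(); both the read and the write
	 * below then get reported, even though the allocator itself is
	 * behaving correctly */
	elem->size += next->size;
	memset(next, 0, sizeof(*next));
}
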
>>
>> I ended up with the same conclusion.
>> Thanks for confirming.
>>
>>
>>>
>>> To properly fix this, we need to answer the following question: what
>>> is the goal of having ASan support in DPDK? Is it there to catch bugs
>>> *in the allocator*, or can we just trust that our allocator code is
>>> correct, and only concern ourselves with user-allocated areas? Because it
>>
>> The best would be to handle both.
>> I don't think clang disables ASan instrumentation for malloc itself.
> 
> I've actually prototyped these changes a bit. We use memset in a few
> places, and that one can't be disabled as far as I can tell (not
> without blacklisting memset for the entire DPDK).
> 
>>
>>
>>> seems like the best way to address this issue would be to just avoid
>>> triggering ASan checks for certain allocator-internal actions: this
>>> way, we don't need to care what the allocator itself does, just what
>>> user code does. As in, IIRC there is a compiler attribute that
>>> disables ASan checks for a specific function: perhaps we could just
>>> wrap certain accesses in that and be done with it?
>>>
>>> What do you think?
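
For reference, the attribute exists in both compilers: GCC spells it
__attribute__((no_sanitize_address)), and clang also accepts
__attribute__((no_sanitize("address"))). A minimal sketch of how it might
be applied to a hypothetical allocator-internal helper:

#include <stddef.h>

/* hypothetical allocator-internal helper: the attribute tells the
 * compiler not to instrument memory accesses made from this function */
__attribute__((no_sanitize_address))
static void
heap_scrub_header(void *hdr, size_t len)
{
	unsigned char *p = hdr;
	size_t i;

	for (i = 0; i < len; i++)
		p[i] = 0;
}
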
>>
>> It is tempting because it is the easiest way to avoid the issue.
>> Though, by waiving those checks in the allocator, would we leave the
>> ASan shadow in a consistent state?
>>
> 
> The "consistent state" is kinda difficult to achieve because there is no 
> "default" state for memory - sometimes it comes as available (0x00), 
> sometimes it is marked as already freed (0xFF). So, coming into a malloc 
> function, we don't know whether the memory we're about to mess with is 
> 0x00 or 0xFF.
> 
> What we could do is mark every malloc header with 0xFF regardless of its 
> status, and leave the rest to "regular" zoning. This would be strange 
> from ASan's point of view (because we're marking memory as "freed" when 
> it wasn't ever allocated), but at least this would be consistent :D
> 
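
In code, that idea would amount to something like the sketch below,
expressed via the public ASan interface rather than DPDK's actual shadow
helpers:

#include <stddef.h>
#include <sanitizer/asan_interface.h>

/* illustrative only: unconditionally mark a header area as "freed"
 * (0xFF-style poison), whatever the element's state, so the shadow for
 * headers is at least always in one known state */
static void
mark_header_poisoned(void *hdr, size_t header_len)
{
	__asan_poison_memory_region(hdr, header_len);
}
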

I've been prototyping a solution for this, but I keep bumping into our 
dual usage of ASan: ASan doesn't differentiate between allocator-internal 
accesses and user code accesses, so we can't either. That means we either 
start marking areas as "accessible" even when they shouldn't be (such as 
unallocated areas that correspond to malloc headers), or we use ASan only 
to mark user-available areas and forego its usage inside the allocator 
entirely.

Right now, the best I can think of is the combination of approaches 
discussed earlier: we mark all malloc element header areas as "available" 
unconditionally, and we also mark unmapped memory as "available" (because 
writing to it will trigger a fault anyway). Marking the headers available 
sacrifices part of the protection ASan gives us: we can't stop ASan from 
complaining about the allocator's own header accesses without also losing 
the ability to detect cases where a user accidentally accesses a malloc 
element header.
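
As a sketch of that zoning policy, using the public ASan interface from
<sanitizer/asan_interface.h> rather than DPDK's internal shadow helpers
(the function and parameter names below are illustrative, not the actual
implementation):

#include <stddef.h>
#include <sanitizer/asan_interface.h>

/* illustrative only: shadow zoning applied when an element is freed */
static void
zone_freed_element(void *hdr, size_t hdr_len,	/* element header */
		void *data, size_t data_len,	/* user data area */
		void *hole, size_t hole_len)	/* unmapped pages, if any */
{
	/* the header stays accessible: the allocator will touch it when
	 * merging adjacent free elements, and those accesses cannot be
	 * told apart from (buggy) user accesses */
	__asan_unpoison_memory_region(hdr, hdr_len);

	/* still-mapped user data is poisoned as usual, so use-after-free
	 * in user code is still caught */
	__asan_poison_memory_region(data, data_len);

	/* an unmapped hole is left "available": any access faults at the
	 * page level anyway, and clean shadow avoids stale poison when
	 * the pages get mapped back in later */
	if (hole_len != 0)
		__asan_unpoison_memory_region(hole, hole_len);
}
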

I haven't yet figured out the cleanest solution (we miss ASan zoning for 
headers somewhere), but at least I got the func_reentrancy test to pass :D

-- 
Thanks,
Anatoly

