[PATCH 2/3] mem: fix ASan shadow for remapped memory segments

David Marchand david.marchand at redhat.com
Tue Apr 26 16:15:07 CEST 2022


On Tue, Apr 26, 2022 at 2:54 PM Burakov, Anatoly
<anatoly.burakov at intel.com> wrote:
> >> @@ -1040,9 +1040,25 @@ malloc_heap_free(struct malloc_elem *elem)
> >>
> >>          rte_mcfg_mem_write_unlock();
> >>   free_unlock:
> >> -       /* Poison memory range if belonging to some still mapped
> >> pages. */
> >> -       if (!unmapped_pages)
> >> +       if (!unmapped_pages) {
> >>                  asan_set_freezone(asan_ptr, asan_data_len);
> >> +       } else {
> >> +               /*
> >> +                * We may be in a situation where we unmapped pages
> >> like this:
> >> +                * malloc header | free space | unmapped space | free
> >> space | malloc header
> >> +                */
> >> +               void *free1_start = asan_ptr;
> >> +               void *free1_end = aligned_start;
> >> +               void *free2_start = RTE_PTR_ADD(aligned_start,
> >> aligned_len);
> >> +               void *free2_end = RTE_PTR_ADD(asan_ptr, asan_data_len);
> >> +
> >> +               if (free1_start < free1_end)
> >> +                       asan_set_freezone(free1_start,
> >> +                               RTE_PTR_DIFF(free1_end, free1_start));
> >> +               if (free2_start < free2_end)
> >> +                       asan_set_freezone(free2_start,
> >> +                               RTE_PTR_DIFF(free2_end, free2_start));
> >> +       }
> >>
> >>          rte_spinlock_unlock(&(heap->lock));
> >>          return ret;
> >>
> >
> > Something like that, yes. I will have to think through this a bit more,
> > especially in light of your func_reentrancy splat :)
> >
>
> So, the reason the splat in the func_reentrancy test happens is as
> follows: the above patch is sorta correct (I have a different one, but
> it does the same thing), yet incomplete. When we add new memory, we
> integrate it into our existing malloc heap, which triggers
> `malloc_elem_join_adjacent_free()`. That in turn writes into the old
> header space being merged, which may already be marked as "freed". So,
> again we are hit with our internal allocator messing with ASan.

I ended up with the same conclusion.
Thanks for confirming.


>
> To properly fix this, we need to answer the following question: what is the
> goal of having ASan support in DPDK? Is it there to catch bugs *in the
> allocator*, or can we just trust that our allocator code is correct, and
> only concern ourselves with user-allocated areas of the code? Because it

The best would be to handle both.
I don't think clang disables ASan instrumentation for the allocator's
own loads and stores.


> seems like the best way to address this issue would be to just avoid
> triggering ASan checks for certain allocator-internal actions: this way,
> we don't need to care what allocator itself does, just what user code
> does. As in, IIRC there was a compiler attribute that disables ASan
> checks for a specific function: perhaps we could just wrap certain
> access in that and be done with it?
>
> What do you think?

It is tempting because it is the easiest way to avoid the issue.
Though, by waiving those checks in the allocator, does it leave the
ASan shadow in a consistent state?


-- 
David Marchand
