[dpdk-dev] [PATCH 17/41] eal: enable memory hotplug support in rte_malloc
Olivier Matz
olivier.matz at 6wind.com
Mon Mar 19 18:46:11 CET 2018
On Sat, Mar 03, 2018 at 01:46:05PM +0000, Anatoly Burakov wrote:
> This set of changes enables rte_malloc to allocate and free memory
> as needed. It works as follows: malloc first checks whether there
> is enough memory already allocated to satisfy the user's request,
> and if there isn't, we try to allocate more. The reverse happens on
> free - we free an element, check its size (after merging adjacent
> free elements), and see whether it is bigger than the hugepage size
> and whether its start and end span at least one full hugepage. If
> so, we remove that area from the malloc heap (adjusting element
> lengths where appropriate) and deallocate the pages.
>
> In legacy mode, runtime allocation/freeing of pages is disabled.
>
> It is worth noting that memseg lists are sorted by page size, and
> that we try our best to satisfy the user's request. That is, if the
> user requests an element from 2MB-page memory, we first check
> whether we can satisfy that request from existing memory; if not,
> we try to allocate more 2MB pages. If that fails and the user also
> specified a "size is hint" flag, we then check other page sizes and
> try to allocate from there. If that fails too, then, depending on
> flags, we may try allocating from other sockets. In other words, we
> try our best to give the user what they asked for, but going to
> other sockets is a last resort - first we try to allocate more
> memory on the same socket.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov at intel.com>
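
To make the fallback order concrete, here is a minimal self-contained
sketch of the policy described above. The names (try_alloc, try_expand,
SIZE_HINT) are hypothetical stand-ins for illustration, not functions
or flags this patch adds; only the order of the strategies is taken
from the commit message:

#include <stddef.h>
#include <stdbool.h>

/* Hypothetical flag meaning "requested page size is a hint only". */
#define SIZE_HINT 0x1u

/* Stubs standing in for the real heap operations: scanning existing
 * free lists, and mapping additional pages of a given size. They
 * always fail here so the example stays self-contained. */
static void *try_alloc(int socket, size_t pg_sz, size_t size)
{ (void)socket; (void)pg_sz; (void)size; return NULL; }
static bool try_expand(int socket, size_t pg_sz, size_t size)
{ (void)socket; (void)pg_sz; (void)size; return false; }

static void *
alloc_on_socket(int socket, size_t pg_sz, size_t size, unsigned int flags,
		const size_t *other_pg_sz, int n_other)
{
	/* 1. Try to satisfy the request from already-allocated memory. */
	void *ret = try_alloc(socket, pg_sz, size);
	if (ret != NULL)
		return ret;

	/* 2. Try to allocate more pages of the requested size. */
	if (try_expand(socket, pg_sz, size))
		return try_alloc(socket, pg_sz, size);

	/* 3. Only if the size is a hint, fall back to other page sizes. */
	if (flags & SIZE_HINT) {
		for (int i = 0; i < n_other; i++)
			if (try_expand(socket, other_pg_sz[i], size))
				return try_alloc(socket, other_pg_sz[i], size);
	}

	/* 4. Caller may retry on other sockets, depending on flags. */
	return NULL;
}

Depending on flags, a caller could then repeat alloc_on_socket() for
the remaining sockets, matching the "last resort" ordering above.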
[...]
> @@ -123,48 +125,356 @@ find_suitable_element(struct malloc_heap *heap, size_t size,
> * scan fails. Once the new memseg is added, it re-scans and should return
> * the new element after releasing the lock.
> */
> -void *
> -malloc_heap_alloc(struct malloc_heap *heap,
> - const char *type __attribute__((unused)), size_t size, unsigned flags,
> - size_t align, size_t bound)
> +static void *
> +heap_alloc(struct malloc_heap *heap, const char *type __rte_unused, size_t size,
> + unsigned int flags, size_t align, size_t bound)
> {
> struct malloc_elem *elem;
>
> size = RTE_CACHE_LINE_ROUNDUP(size);
> align = RTE_CACHE_LINE_ROUNDUP(align);
>
> - rte_spinlock_lock(&heap->lock);
> -
> elem = find_suitable_element(heap, size, flags, align, bound);
> if (elem != NULL) {
> elem = malloc_elem_alloc(elem, size, align, bound);
> +
> /* increase heap's count of allocated elements */
> heap->alloc_count++;
> }
> - rte_spinlock_unlock(&heap->lock);
>
> return elem == NULL ? NULL : (void *)(&elem[1]);
> }
The comment on top of the function says "after releasing the lock",
but that no longer seems accurate, since the locking has been moved
out of this function.
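
For context: the lock presumably moves up into the public wrapper, so
that growing the heap and re-scanning both happen under one critical
section, which would explain why heap_alloc() itself no longer locks.
A rough sketch of what such a wrapper might look like in malloc_heap.c;
try_expand_heap is an assumed helper name, stubbed out here, not
necessarily what the series calls it:

#include "malloc_heap.h"

/* Assumed helper added by this series: map more pages into the heap;
 * returns 0 on success. Stubbed out for illustration. */
static int
try_expand_heap(struct malloc_heap *heap, size_t size)
{
	(void)heap; (void)size;
	return -1;
}

void *
malloc_heap_alloc(struct malloc_heap *heap, const char *type, size_t size,
		unsigned int flags, size_t align, size_t bound)
{
	void *ret;

	rte_spinlock_lock(&heap->lock);

	ret = heap_alloc(heap, type, size, flags, align, bound);
	if (ret == NULL && try_expand_heap(heap, size) == 0)
		/* new memory was added; re-scan under the same lock */
		ret = heap_alloc(heap, type, size, flags, align, bound);

	rte_spinlock_unlock(&heap->lock);

	return ret;
}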
[...]
> int
> malloc_heap_free(struct malloc_elem *elem)
> {
> struct malloc_heap *heap;
> - struct malloc_elem *ret;
> + void *start, *aligned_start, *end, *aligned_end;
> + size_t len, aligned_len;
> + struct rte_memseg_list *msl;
> + int n_pages, page_idx, max_page_idx, ret;
>
> if (!malloc_elem_cookies_ok(elem) || elem->state != ELEM_BUSY)
> return -1;
>
> /* elem may be merged with previous element, so keep heap address */
> heap = elem->heap;
> + msl = elem->msl;
>
> rte_spinlock_lock(&(heap->lock));
>
> - ret = malloc_elem_free(elem);
> + elem = malloc_elem_free(elem);
>
> - rte_spinlock_unlock(&(heap->lock));
> + /* anything after this is a bonus */
> + ret = 0;
> +
The fact that there were previously two rte_spinlock_unlock() calls
looks strange to me. Is there something wrong in a previous patch?
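
For reference, the new locals quoted above (aligned_start, aligned_end,
aligned_len) suggest the arithmetic on the free path: shrink the freed
element's span inward to page boundaries and only hand memory back if
at least one full page remains. A standalone sketch with the page size
hard-coded to 2MB for illustration (the real code presumably takes it
from the memseg list):

#include <stdint.h>
#include <stdio.h>

#define PG_SZ ((size_t)2 * 1024 * 1024)	/* 2MB, for illustration */

/* Round an address up/down to a page boundary (same idea as DPDK's
 * RTE_PTR_ALIGN_CEIL/RTE_PTR_ALIGN_FLOOR macros). */
static uintptr_t align_ceil(uintptr_t v)  { return (v + PG_SZ - 1) & ~(PG_SZ - 1); }
static uintptr_t align_floor(uintptr_t v) { return v & ~(PG_SZ - 1); }

int main(void)
{
	/* Hypothetical span of a freed (and merged) element. */
	uintptr_t start = 0x40001000, end = 0x40801000;

	uintptr_t aligned_start = align_ceil(start);
	uintptr_t aligned_end = align_floor(end);
	size_t aligned_len = aligned_end > aligned_start ?
			aligned_end - aligned_start : 0;

	if (aligned_len >= PG_SZ)
		printf("can return %zu bytes (%zu pages) to the system\n",
				aligned_len, aligned_len / PG_SZ);
	else
		printf("freed element does not span a full page; keep it\n");
	return 0;
}

With these example addresses, the 8MB free span contains three fully
aligned 2MB pages (6MB) that could be unmapped.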