#
b9289d76 |
| 12-Jun-2024 |
Nicholas Piggin <npiggin@gmail.com> |
common/sieve: Support machines without MMU
Not all powerpc CPUs provide MMU support. Define vm_available() that is true by default but archs can override it. Use this to run VM tests.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-ID: <20240612052322.218726-6-npiggin@gmail.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
|
#
0a5b31d2 |
| 22-Jun-2021 |
Sean Christopherson <seanjc@google.com> |
lib/vmalloc: Let arch code pass a value to its setup_mmu() helper
Add an inner __setup_vm() that takes an opaque param and passes said param along to setup_mmu(). x86 will use the param to configure its page tables for kernel vs. user so that tests that want to enable SMEP (fault if kernel executes user page) can do so without resorting to hacks and without breaking tests that need user pages, i.e. that run user code.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210622210047.3691840-10-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
7e3e823b |
| 15-Jan-2021 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc.h: remove align_min from struct alloc_ops
Remove align_min from struct alloc_ops, since it is no longer used.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210115123730.381612-7-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
12513346 |
| 15-Jan-2021 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/vmalloc: add some asserts and improvements
Add some asserts to make sure the state is consistent.
Simplify and improve the readability of vm_free.
If a NULL pointer is freed, no operation is performed.
Fixes: 3f6fee0d4da4 ("lib/vmalloc: vmalloc support for handling allocation metadata")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20210115123730.381612-4-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
f90ddba3 |
| 02-Oct-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc: simplify free and malloc
Remove the size parameter from the various free functions.
Since the backends can handle the allocation sizes on their own, simplify the generic malloc wrappers.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20201002154420.292134-6-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
8131e91a |
| 02-Oct-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc_page: complete rewrite of the page allocator
This is a complete rewrite of the page allocator.
This will bring a few improvements:
* no need to specify the size when freeing
* allocate small areas with a large alignment without wasting memory
* ability to initialize and use multiple memory areas (e.g. DMA)
* more sanity checks
A few things have changed:
* initialization cannot be done with free_pages like before; page_alloc_init_area has to be used instead
Arch-specific changes:
* s390x now uses the area below 2GiB for SMP lowcore initialization.
Details: Each memory area has metadata at the very beginning. The metadata is a byte array with one entry per usable page (so, excluding the metadata itself). Each entry indicates if the page is special (unused for now), if it is allocated, and the order of the block. Both free and allocated pages are part of larger blocks.
Some more fixed size metadata is present in a fixed-size static array. This metadata contains start and end page frame numbers, the pointer to the metadata array, and the array of freelists. The array of freelists has an entry for each possible order (indicated by the macro NLISTS, defined as BITS_PER_LONG - PAGE_SHIFT).
On allocation, if the free list for the needed size is empty, larger blocks are split. When a small allocation with a large alignment is requested, an appropriately large block is split, to guarantee the alignment.
When a block is freed, an attempt will be made to merge it into the neighbour, iterating the process as long as possible.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20201002154420.292134-5-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
3f6fee0d |
| 02-Oct-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/vmalloc: vmalloc support for handling allocation metadata
Add allocation metadata handling to vmalloc.
In upcoming patches, allocation metadata will have to be handled directly by the lower-level allocators, and will not be handled by the common wrapper.
In this patch, the number of allocated pages plus a magic value are written immediately before the returned pointer. This means that multi-page allocations will allocate one extra page (which is no worse than what the current allocator does).
For small allocations there is an optimization: the returned address is intentionally not page-aligned. This signals that the allocation spanned one page only. In this case the metadata is only the magic value, and it is also saved immediately before the returned pointer. Since the pointer does not point to the beginning of the page, there is always space in the same page for the magic value.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20201002154420.292134-3-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
0d622fcb |
| 06-Jul-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/vmalloc: allow vm_memalign with alignment > PAGE_SIZE
Allow allocating aligned virtual memory with alignment larger than only one page.
Add a check that the backing pages were actually allocated.
Export the alloc_vpages_aligned function to allow users to allocate non-backed aligned virtual addresses.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-Id: <20200706164324.81123-5-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
3874bb46 |
| 06-Jul-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/vmalloc: fix pages count local variable to be size_t
Since size is of type size_t, size >> PAGE_SHIFT might still be too big for a normal unsigned int.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Message-Id: <20200706164324.81123-2-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
17b9f93e |
| 22-Jun-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/vmalloc: add locking and a check for initialization
Make sure init_alloc_vpage is never called when vmalloc is in use.
Get both init_alloc_vpage and setup_vm to use the lock.
For setup_vm we only check at the end because at least on some architectures setup_mmu can call init_alloc_vpage, which would cause a deadlock.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-9-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
4aabe7c0 |
| 22-Jun-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/vmalloc: fix potential race and non-standard pointer arithmetic
The pointer vfree_top should only be accessed with the lock held, so make sure we return a local copy of the pointer taken safely inside the lock.
Also avoid doing pointer arithmetic on void pointers. Gcc allows it but it is ugly. Use uintptr_t for doing maths on the pointer.
This will also come in useful in upcoming patches.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200622162141.279716-7-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
6ea7326a |
| 22-Jun-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib: use PAGE_ALIGN
Now that PAGE_ALIGN is available on all architectures, start using it in common code to improve readability.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200622162141.279716-4-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
48a0145f |
| 10-Oct-2019 |
Paolo Bonzini <pbonzini@redhat.com> |
x86: allow using memory above 4 GiB
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
bf62a925 |
| 17-Jan-2018 |
Andrew Jones <drjones@redhat.com> |
page_alloc: allow initialization before setup_vm call
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
|
#
dcda215b |
| 23-Oct-2017 |
Paolo Bonzini <pbonzini@redhat.com> |
vmalloc: convert to alloc_ops
The final step moves the vmalloc and vmap implementation to generic code, rewriting vmalloc and vfree as an alloc_ops implementation that is installed by setup_vm.
Thi
vmalloc: convert to alloc_ops
The final step moves the vmalloc and vmap implementation to generic code, rewriting vmalloc and vfree as an alloc_ops implementation that is installed by setup_vm.
This way all architectures can benefit from it just by calling setup_vm and providing the implementation of install_page.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
937e2392 |
| 23-Oct-2017 |
Paolo Bonzini <pbonzini@redhat.com> |
x86: use alloc_phys
As the next step in making vmalloc/vfree available to every other architecture, register the available memory with phys_alloc for x86 too, and add a service to return the unused phys_alloc region. This makes setup_vm architecture-independent.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
efd8e5aa |
| 22-Oct-2017 |
Paolo Bonzini <pbonzini@redhat.com> |
lib: start moving vmalloc to generic code
For now, vmalloc provides a primitive that allocates contiguous virtual addresses. Together with a page allocator that allocates single physical memory pages, it will provide an implementation of alloc_ops for when an MMU is enabled.
Before doing that, however, we need to move the page allocator and give lib/alloc.c's malloc feature parity with lib/x86/vm.c's vmalloc.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|