4a54e8a3 | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: Properly handle requests for fresh blocks
Upon initialization, all memory in an area is marked as fresh. Once memory is used and freed, the freed memory is marked as free.
Free memory is always appended to the front of the freelist, meaning that fresh memory stays on the tail.
When a block of fresh memory is split, the two blocks are put on the tail of the appropriate freelist, so they can be found when needed.
When a fresh block is requested, a fresh block one order bigger is taken, the first half is put back in the free pool (on the tail), and the second half is returned. The reason behind this is that the first page of every block always contains the pointers of the freelist. Since the first page of a fresh block is actually not fresh, it cannot be returned when a fresh allocation is requested.
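The head/tail discipline above is what makes fresh allocations work; the standalone toy program below (made-up names, not the lib/alloc_page code) shows how pushing freed blocks on the front and appending fresh ones at the back keeps fresh memory reachable from the tail:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy doubly-linked freelist: freed (no longer fresh) blocks are pushed
     * on the front, fresh blocks are appended at the back, so taking from
     * the tail yields fresh memory whenever any is left. */
    struct block {
        struct block *prev, *next;
        bool fresh;
    };

    static struct block head = { &head, &head, false };

    static void push_front(struct block *b)
    {
        b->prev = &head;
        b->next = head.next;
        head.next->prev = b;
        head.next = b;
    }

    static void push_back(struct block *b)
    {
        b->next = &head;
        b->prev = head.prev;
        head.prev->next = b;
        head.prev = b;
    }

    int main(void)
    {
        struct block fresh1 = { .fresh = true }, fresh2 = { .fresh = true };
        struct block recycled = { .fresh = false };

        push_back(&fresh1);      /* fresh memory is appended at init time */
        push_back(&fresh2);
        push_front(&recycled);   /* freed memory goes to the front */

        printf("head of the freelist is %s\n", head.next->fresh ? "fresh" : "recycled");
        printf("tail of the freelist is %s\n", head.prev->fresh ? "fresh" : "recycled");
        return 0;
    }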
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210115123730.381612-12-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

0f4f39bd | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: Wire up FLAG_DONTZERO
Memory allocated without FLAG_DONTZERO will now be zeroed before being returned to the caller.
This means that by default all allocated memory is now zeroed, restoring the default behaviour that had been accidentally removed by a previous commit.
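A minimal sketch of the wiring, assuming a FLAG_DONTZERO bit in the flags argument; the malloc-based backend below is only a stand-in so the snippet compiles on its own, whereas the real allocator hands out pages from its freelists:

    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE     4096UL
    #define FLAG_DONTZERO 0x10000u   /* bit value is an assumption */

    /* Stand-in backend so this sketch is self-contained. */
    static void *backend_alloc(unsigned int order)
    {
        return aligned_alloc(PAGE_SIZE, PAGE_SIZE << order);
    }

    /* Default behaviour: zero the memory unless the caller opts out. */
    static void *alloc_pages_sketch(unsigned int order, unsigned int flags)
    {
        void *p = backend_alloc(order);

        if (p && !(flags & FLAG_DONTZERO))
            memset(p, 0, PAGE_SIZE << order);
        return p;
    }

    int main(void)
    {
        void *zeroed = alloc_pages_sketch(0, 0);              /* zeroed by default */
        void *raw    = alloc_pages_sketch(0, FLAG_DONTZERO);  /* left as-is */

        free(zeroed);
        free(raw);
        return 0;
    }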
Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Reported-by: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20210115123730.381612-11-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

550b4683 | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc: replace areas with more generic flags
Replace the areas parameter with a more generic flags parameter. This allows for up to 16 allocation areas and 16 allocation flags.
This patch introduces the flags and changes the names of the functions; subsequent patches will actually wire up the flags to do something.
The first two flags introduced are:
- FLAG_DONTZERO to ask that the allocated memory not be zeroed
- FLAG_FRESH to indicate that the allocated memory should not have been touched (read or written to) in any way since boot
This patch also fixes the order of arguments to consistently have alignment first and then size, thereby fixing a bug where the two values would get swapped.
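One way to picture the new parameter, assuming the low 16 bits select areas and the high 16 bits carry behaviour flags; the exact bit assignments are illustrative, not copied from lib/alloc_page.h:

    #include <assert.h>
    #include <stdbool.h>

    #define AREA_MASK     0x0000ffffu   /* one bit per allocation area */
    #define FLAG_DONTZERO 0x00010000u   /* do not zero the returned memory */
    #define FLAG_FRESH    0x00020000u   /* memory must be untouched since boot */

    static inline bool wants_zeroing(unsigned int flags)
    {
        return !(flags & FLAG_DONTZERO);
    }

    int main(void)
    {
        unsigned int flags = 0x0001u | FLAG_FRESH;   /* area 0, fresh memory */

        assert((flags & AREA_MASK) == 0x0001u);      /* area selection bits */
        assert(flags & FLAG_FRESH);                  /* behaviour flag bits */
        assert(wants_zeroing(flags));                /* zeroing is still the default */
        return 0;
    }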
Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210115123730.381612-10-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

dbad82e3 | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: rework metadata format
This patch changes the format of the metadata so that the metadata is now a 2-bit field instead of two separate flags.
This allows four different states for memory:
STATUS_FRESH: the memory is free and has not been touched at all since boot (not even read from!)
STATUS_FREE: the memory is free, but it is probably not fresh any more
STATUS_ALLOCATED: the memory has been allocated and is in use
STATUS_SPECIAL: the memory has been removed from the pool of allocated memory for some kind of special purpose according to the needs of the caller
Some macros are also introduced to test the status of a specific metadata item.
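A compact sketch of such a 2-bit status field and its test macros, assuming the status sits in the top two bits of the per-page metadata byte and the block order in the low bits; the actual layout in lib/alloc_page.c may differ:

    #include <assert.h>
    #include <stdint.h>

    #define ORDER_MASK       0x3fu
    #define STATUS_SHIFT     6
    #define STATUS_MASK      (3u << STATUS_SHIFT)

    #define STATUS_FRESH     (0u << STATUS_SHIFT)   /* free, never touched since boot */
    #define STATUS_FREE      (1u << STATUS_SHIFT)   /* free, probably not fresh */
    #define STATUS_ALLOCATED (2u << STATUS_SHIFT)   /* allocated and in use */
    #define STATUS_SPECIAL   (3u << STATUS_SHIFT)   /* reserved for a special purpose */

    #define IS_FRESH(x)      (((x) & STATUS_MASK) == STATUS_FRESH)
    #define IS_FREE(x)       (((x) & STATUS_MASK) == STATUS_FREE)
    #define IS_USABLE(x)     (IS_FRESH(x) || IS_FREE(x))

    int main(void)
    {
        uint8_t page = STATUS_ALLOCATED | 3;   /* allocated, part of an order-3 block */

        assert(!IS_USABLE(page));
        assert((page & ORDER_MASK) == 3);
        return 0;
    }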
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210115123730.381612-9-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

bcffccfc | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: Optimization to skip known empty freelists
Keep track of the largest block order available in each area, and do not search past it when looking for free memory.
This will avoid needlessly scanning the freelists for the largest block orders, which will be empty in most cases.
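The idea can be sketched as follows, with toy types and names rather than the real lib/alloc_page structures: the search stops at the cached largest non-empty order instead of walking every freelist.

    #include <stdio.h>

    #define NLISTS 16   /* enough orders for this illustration */

    /* free_count[i] stands in for the freelist of order i; max_order caches
     * the largest order that currently has free blocks in this area. */
    struct toy_area {
        unsigned long free_count[NLISTS];
        unsigned int max_order;
    };

    /* Return the smallest order >= 'order' with a free block, or -1.
     * The loop stops at max_order instead of scanning all NLISTS entries. */
    static int find_free_order(const struct toy_area *a, unsigned int order)
    {
        for (unsigned int i = order; i <= a->max_order; i++)
            if (a->free_count[i])
                return (int)i;
        return -1;
    }

    int main(void)
    {
        struct toy_area a = { .max_order = 7 };

        a.free_count[7] = 1;   /* a single big free block */
        printf("order-3 request satisfied by splitting an order-%d block\n",
               find_free_order(&a, 3));
        return 0;
    }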
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210115123730.381612-8-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

7e3e823b | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc.h: remove align_min from struct alloc_ops
Remove align_min from struct alloc_ops, since it is no longer used.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210115123730.381612-7-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

01d8952b | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: fix and improve the page allocator
This patch introduces some improvements to the code, mostly for readability, but also a few semantic details and improvements to the documentation.
* introduce and use pfn_t to semantically tag parameters as PFNs (a minimal sketch follows below)
* remove the PFN macro, use virt_to_pfn instead
* rename area_or_metadata_contains and area_contains to area_contains_pfn and usable_area_contains_pfn respectively
* fix/improve comments in lib/alloc_page.h
* move some wrapper functions to the header
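A minimal sketch of the pfn_t idea; the underlying type and PAGE_SHIFT value are assumptions, the point is only that page frame numbers get their own type instead of bare integers:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    typedef uint64_t pfn_t;   /* semantic tag: "this value is a page frame number" */

    static inline pfn_t virt_to_pfn_sketch(const void *virt)
    {
        return (pfn_t)((uintptr_t)virt >> PAGE_SHIFT);
    }

    int main(void)
    {
        int x;
        pfn_t pfn = virt_to_pfn_sketch(&x);

        printf("&x lives in page frame %llu\n", (unsigned long long)pfn);
        return 0;
    }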
Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Fixes: 34c950651861 ("lib/alloc_page: allow reserving arbitrary memory ranges")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20210115123730.381612-6-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

322cdd64 | 15-Jan-2021 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/asm: Fix definitions of memory areas
Fix the definitions of the memory areas.
Bring the headers in line with the rest of the asm headers by adding the appropriate #ifdef _ASM$ARCH_ guards.
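The guard pattern looks like this, using s390x as an example; the macro name follows the _ASM$ARCH_ convention and the body is only a placeholder:

    #ifndef _ASMS390X_MEMORY_AREAS_H_
    #define _ASMS390X_MEMORY_AREAS_H_

    /* ... architecture-specific memory area definitions go here ... */

    #endif /* _ASMS390X_MEMORY_AREAS_H_ */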
Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Message-Id: <20210115123730.381612-5-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

34c95065 | 02-Oct-2020 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: allow reserving arbitrary memory ranges
Two new functions are introduced that allow specific memory ranges to be reserved and freed.
This is useful when a testcase needs memory at very specific addresses, with the guarantee that the page allocator will not touch those pages.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20201002154420.292134-8-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

f90ddba3 | 02-Oct-2020 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc: simplify free and malloc
Remove the size parameter from the various free functions.
Since the backends can handle the allocation sizes on their own, simplify the generic malloc wrappers.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20201002154420.292134-6-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

8131e91a | 02-Oct-2020 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: complete rewrite of the page allocator
This is a complete rewrite of the page allocator.
This will bring a few improvements:
* no need to specify the size when freeing
* allocate small areas with a large alignment without wasting memory
* ability to initialize and use multiple memory areas (e.g. DMA)
* more sanity checks
A few things have changed:
* initialization cannot be done with free_pages like before, page_alloc_init_area has to be used instead
Arch-specific changes:
* s390x now uses the area below 2GiB for SMP lowcore initialization.
Details: Each memory area has metadata at the very beginning. The metadata is a byte array with one entry per usable page (so, excluding the metadata itself). Each entry indicates if the page is special (unused for now), if it is allocated, and the order of the block. Both free and allocated pages are part of larger blocks.
Some more fixed size metadata is present in a fixed-size static array. This metadata contains start and end page frame numbers, the pointer to the metadata array, and the array of freelists. The array of freelists has an entry for each possible order (indicated by the macro NLISTS, defined as BITS_PER_LONG - PAGE_SHIFT).
On allocation, if the free list for the needed size is empty, larger blocks are split. When a small allocation with a large alignment is requested, an appropriately large block is split, to guarantee the alignment.
When a block is freed, an attempt will be made to merge it into the neighbour, iterating the process as long as possible.
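The bookkeeping described above can be pictured roughly as follows; the field names are an approximation for illustration, not the exact lib/alloc_page.c definitions:

    #include <stdint.h>

    #define PAGE_SHIFT    12
    #define BITS_PER_LONG 64
    #define NLISTS        (BITS_PER_LONG - PAGE_SHIFT)   /* one freelist per order */

    /* Freelist node, stored in the first page of every free block. */
    struct linked_list {
        struct linked_list *prev, *next;
    };

    /* Fixed-size per-area metadata: page frame range, pointer to the per-page
     * metadata byte array, and the array of freelists. */
    struct mem_area {
        uintptr_t base;                       /* first usable PFN */
        uintptr_t top;                        /* PFN after the last usable one */
        uint8_t *page_states;                 /* one byte per usable page */
        struct linked_list freelists[NLISTS];
    };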
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20201002154420.292134-5-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

62acda9d | 14-Jul-2020 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: Fix compilation issue on 32bit archs
The assert in lib/alloc_page is hardcoded to long.
Use the z modifier instead, which is meant to be used for size_t.
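For illustration, the portable way to format a size_t; this is a generic example, not the exact assert from lib/alloc_page:

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        size_t size = 4096;

        /* "%zu" is defined for size_t, so it is correct on both 32-bit
         * (where size_t is not long) and 64-bit targets; a format string
         * hardcoded to long is what broke the 32-bit build. */
        printf("allocating %zu bytes\n", size);
        assert(size > 0);
        return 0;
    }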
Fixes: 73f4b202beb39 ("lib/alloc_page: change some parameter types")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200714130030.56037-3-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

9e801bd9 | 06-Jul-2020 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: move get_order and is_power_of_2 to bitops.h
The functions get_order and is_power_of_2 are simple and should probably be in a header, like similar simple functions in bitops.h
Since they concern bit manipulation, the logical place for them is in bitops.h
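A minimal sketch of the two helpers; the versions in lib/bitops.h may be written differently (for example in terms of other bit helpers), but the intent is the same:

    #include <assert.h>
    #include <stdbool.h>

    static inline bool is_power_of_2(unsigned long n)
    {
        return n && !(n & (n - 1));
    }

    /* Smallest order such that (1ul << order) >= size. */
    static inline unsigned int get_order(unsigned long size)
    {
        unsigned int order = 0;

        while ((1ul << order) < size)
            order++;
        return order;
    }

    int main(void)
    {
        assert(is_power_of_2(4096) && !is_power_of_2(4095));
        assert(get_order(1) == 0);
        assert(get_order(4096) == 12);
        assert(get_order(4097) == 13);
        return 0;
    }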
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-Id: <20200706164324.81123-4-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

73f4b202 | 06-Jul-2020 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: change some parameter types
For size parameters, size_t is probably semantically more appropriate than unsigned long (although they map to the same value).
For order, unsigned long is just too big. Also, get_order returns an unsigned int anyway.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Message-Id: <20200706164324.81123-3-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

4a4f8af2 | 22-Jun-2020 | Claudio Imbrenda <imbrenda@linux.ibm.com>
lib/alloc_page: make get_order return unsigned int
Since get_order never returns a negative value, it makes sense to make it return an unsigned int.
The returned value will be in practice always very small, a u8 would probably also do the trick.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20200622162141.279716-8-imbrenda@linux.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

f22e527d | 02-Apr-2020 | Eric Auger <eric.auger@redhat.com>
page_alloc: Introduce get_order()
Compute the power-of-2 order of a size. Use it in page_memalign. Other users are looming.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>

e9554497 | 14-Sep-2019 | Marc Orr <marcorr@google.com>
x86: nvmx: test max atomic switch MSRs
Exercise nested VMX's atomic MSR switch code (e.g., VM-entry MSR-load list) at the maximum number of MSRs supported, as described in the SDM, in the appendix chapter titled "MISCELLANEOUS DATA".
Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

bd5f4c8f | 03-May-2019 | Nadav Amit <nadav.amit@gmail.com>
lib/alloc_page: Zero allocated pages
One of the most important properties of tests is reproducibility. For tests to be reproducible, the same environment should be set on each test invocation.
When it comes to memory content, this is not exactly the case in kvm-unit-tests. The tests might, mistakenly or intentionally, assume that memory is zeroed (by the BIOS or KVM). However, failures might not be reproducible if this assumption is broken.
As an example, consider x86 do_iret(), which mistakenly does not push SS:RSP onto the stack in 64-bit mode, although they are popped unconditionally on iret.
Do not assume that memory is zeroed. Clear it once it is allocated to allow tests to easily be reproduced.
Signed-off-by: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

da7eceb3 | 03-Apr-2018 | Thomas Huth <thuth@redhat.com>
lib/alloc: Fix/check the prototypes in the allocator files
We should make sure that the prototypes match the implementations and thus include the alloc_page.h header from alloc_page.c, and the alloc_phys.h header from alloc_phys.c. This way the files can be compiled with -Wmissing-prototypes and -Wstrict-prototypes, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1522743052-8266-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

be60de6f | 17-Jan-2018 | Andrew Jones <drjones@redhat.com>
page_alloc: add yet another memalign
If we want both early alloc ops and alloc_page(), then it's best to just give all the memory to page_alloc and then base the early alloc ops on that.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>

bf62a925 | 17-Jan-2018 | Andrew Jones <drjones@redhat.com>
page_alloc: allow initialization before setup_vm call
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>

937e2392 | 23-Oct-2017 | Paolo Bonzini <pbonzini@redhat.com>
x86: use alloc_phys
As the next step in making vmalloc/vfree available to every other architecture, register the available memory with phys_alloc for x86 too, and add a service to return the unused phys_alloc region. This makes setup_vm architecture-independent.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2477913f | 23-Oct-2017 | Paolo Bonzini <pbonzini@redhat.com>
alloc_page: fix off-by-one
It is okay to free up to the very last page of virtual address space, in which case mem+size is zero.
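A worked example of the corner case, using hypothetical numbers (a flat address space and a 4 KiB page) just to show the wraparound the fix has to tolerate:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uintptr_t mem  = UINTPTR_MAX - 4095;   /* the very last 4 KiB page */
        uintptr_t size = 4096;

        /* Freeing a range that ends exactly at the top of the address space
         * makes mem + size wrap around to zero... */
        assert(mem + size == 0);

        /* ...while the last byte of the range is still a valid address, so a
         * bounds check must tolerate the wrapped end value. */
        assert(mem + (size - 1) == UINTPTR_MAX);
        return 0;
    }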
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

5aca024e | 22-Oct-2017 | Paolo Bonzini <pbonzini@redhat.com>
lib: move page allocator here from x86
This is another step in porting the x86 (v)malloc implementation to other architectures.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>