#
14b54ed7 |
| 26-Jul-2022 |
Paolo Bonzini <pbonzini@redhat.com> |
Merge tag 'for_paolo' of https://github.com/sean-jc/kvm-unit-tests into HEAD
x86 fixes, cleanups, and new sub-tests:
- Bug fix for the VMX-preemption timer expiration test
- Refactor SVM tests to split out NPT tests
- Add tests for MCE banks to MSR test
- Add SMP Support for x86 UEFI tests
- x86: nVMX: Add VMXON #UD test (and exception cleanup)
- PMU cleanup and related nVMX bug fixes
|
#
d36b378f |
| 15-Jun-2022 |
Varad Gautam <varad.gautam@suse.com> |
x86: Move ap_init() to smp.c
ap_init() copies the SIPI vector to lowmem, sends INIT/SIPI to APs and waits on the APs to come up.
Port this routine to C from asm and move it to smp.c to allow sharing this functionality between the EFI (-fPIC) and non-EFI builds.
Call ap_init() from the EFI setup path to reset the APs to a known location.
Signed-off-by: Varad Gautam <varad.gautam@suse.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220615232943.1465490-4-seanjc@google.com
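A minimal C sketch of what the routine described above does; every name except ap_init() (the trampoline symbols, the APIC helpers and the CPU counters) is an illustrative assumption, not the actual kvm-unit-tests API:

    #include <string.h>

    /* Assumed symbols, for illustration only. */
    extern char sipi_stub_start[], sipi_stub_end[];   /* real-mode AP trampoline   */
    extern volatile int cpu_online_count;             /* bumped by each AP on boot */
    extern int ncpus;                                  /* expected number of CPUs   */
    extern void apic_broadcast_init(void);             /* INIT to all-but-self      */
    extern void apic_broadcast_sipi(unsigned vector);  /* SIPI to all-but-self      */

    #define SIPI_PAGE 0x1000UL  /* low-memory page the trampoline is copied to */

    void ap_init(void)
    {
            /* Copy the real-mode startup stub below 1 MiB. */
            memcpy((void *)SIPI_PAGE, sipi_stub_start, sipi_stub_end - sipi_stub_start);

            /* Kick the APs: INIT, then SIPI pointing at the stub's page number. */
            apic_broadcast_init();
            apic_broadcast_sipi(SIPI_PAGE >> 12);

            /* Wait for every AP to report in. */
            while (cpu_online_count < ncpus)
                    ;
    }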
|
#
9f0ae301 |
| 09-Jun-2021 |
Cornelia Huck <cohuck@redhat.com> |
lib: unify header guards
Standardize header guards to _LIB_HEADER_H_.
Signed-off-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Message-Id: <20210609143712.60933-3-cohuck@redhat.com>
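For a hypothetical header lib/foo.h, the standardized guard then looks like this (file name chosen for illustration):

    /* lib/foo.h -- illustrative example of the _LIB_HEADER_H_ convention */
    #ifndef _LIB_FOO_H_
    #define _LIB_FOO_H_

    /* declarations go here */

    #endif /* _LIB_FOO_H_ */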
|
#
550b4683 |
| 15-Jan-2021 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc: replace areas with more generic flags
Replace the areas parameter with a more generic flags parameter. This allows for up to 16 allocation areas and 16 allocation flags.
This patch introduces the flags and changes the names of the functions; subsequent patches will actually wire up the flags to do something.
The first two flags introduced are:
- FLAG_DONTZERO to ask for the allocated memory not to be zeroed
- FLAG_FRESH to indicate that the allocated memory should not have been touched (read or written to) in any way since boot.
This patch also fixes the order of arguments to consistently have alignment first and then size, thereby fixing a bug where the two values would get swapped.
Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20210115123730.381612-10-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
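A hedged sketch of the two flags and an allocation call using them; the values, the bit layout and the alloc_pages_flags() prototype are assumptions for illustration, the real definitions live in lib/alloc_page.h:

    /* Illustrative encoding: low bits select one of up to 16 areas,
     * high bits carry behaviour flags. */
    #define FLAG_DONTZERO  0x10000u  /* do not zero the returned pages     */
    #define FLAG_FRESH     0x20000u  /* pages must be untouched since boot */

    void *alloc_pages_flags(unsigned int order, unsigned int flags);

    static void *get_fresh_block(void)
    {
            /* Four contiguous pages (order 2) of never-touched memory. */
            return alloc_pages_flags(2, FLAG_FRESH);
    }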
|
#
01d8952b |
| 15-Jan-2021 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc_page: fix and improve the page allocator
This patch introduces some improvements to the code, mostly readability improvements, but also some semantic details, and improvements in the documentation.
* introduce and use pfn_t to semantically tag parameters as PFNs
* remove the PFN macro, use virt_to_pfn instead
* rename area_or_metadata_contains and area_contains to area_contains_pfn and usable_area_contains_pfn respectively
* fix/improve comments in lib/alloc_page.h
* move some wrapper functions to the header
Fixes: 8131e91a4b61 ("lib/alloc_page: complete rewrite of the page allocator") Fixes: 34c950651861 ("lib/alloc_page: allow reserving arbitrary memory ranges")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20210115123730.381612-6-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
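A simplified sketch of the pfn_t tagging described above; the typedef and the identity-mapped virt_to_pfn() are assumptions made only to keep the example self-contained:

    #include <stdint.h>

    #define PAGE_SHIFT 12

    /* Semantic tag: a value of this type is a page frame number, not an address. */
    typedef uint64_t pfn_t;

    static inline pfn_t virt_to_pfn(void *addr)
    {
            /* Assumes an identity-mapped test environment. */
            return (pfn_t)((uintptr_t)addr >> PAGE_SHIFT);
    }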
|
#
322cdd64 |
| 15-Jan-2021 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/asm: Fix definitions of memory areas
Fix the definitions of the memory areas.
Bring the headers in line with the rest of the asm headers, by having the appropriate #ifdef _ASM$ARCH_ guarding the headers.
Fixes: d74708246bd9 ("lib/asm: Add definitions of memory areas")
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Message-Id: <20210115123730.381612-5-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
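For example, an x86 variant of such a header would be guarded roughly as follows (file name and guard macro are illustrative):

    /* lib/x86/asm/memory_areas.h -- illustrative */
    #ifndef _ASMX86_MEMORY_AREAS_H_
    #define _ASMX86_MEMORY_AREAS_H_

    /* x86-specific memory area definitions go here */

    #endif /* _ASMX86_MEMORY_AREAS_H_ */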
|
#
34c95065 |
| 02-Oct-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc_page: allow reserving arbitrary memory ranges
Two new functions are introduced, that allow specific memory ranges to be reserved and freed.
This is useful when a testcase needs memory at very specific addresses, with the guarantee that the page allocator will not touch those pages.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20201002154420.292134-8-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
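A hedged usage sketch: the reserve_pages()/unreserve_pages() names and prototypes below are placeholders for the two functions the patch adds, see lib/alloc_page.h for the real ones:

    /* Placeholder prototypes, for illustration only. */
    int reserve_pages(unsigned long base_pfn, unsigned long count);
    void unreserve_pages(unsigned long base_pfn, unsigned long count);

    static void use_fixed_buffer(void)
    {
            /* Keep the page allocator away from 16 pages starting at 1 MiB. */
            if (reserve_pages(0x100000 >> 12, 16) == 0) {
                    /* ... test code that needs those exact addresses ... */
                    unreserve_pages(0x100000 >> 12, 16);
            }
    }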
|
#
f90ddba3 |
| 02-Oct-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc: simplify free and malloc
Remove the size parameter from the various free functions
Since the backends can handle the allocation sizes on their own, simplify the generic malloc wrappers.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20201002154420.292134-6-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
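In terms of prototypes, the simplification amounts to something like the following (illustrative):

    #if 0
    void free_pages(void *mem, size_t size);  /* before: caller must pass the size  */
    #endif
    void free_pages(void *mem);               /* after: the backend tracks the size */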
|
#
8131e91a |
| 02-Oct-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc_page: complete rewrite of the page allocator
This is a complete rewrite of the page allocator.
This will bring a few improvements:
* no need to specify the size when freeing
* allocate small areas with a large alignment without wasting memory
* ability to initialize and use multiple memory areas (e.g. DMA)
* more sanity checks
A few things have changed:
* initialization cannot be done with free_pages like before, page_alloc_init_area has to be used instead
Arch-specific changes:
* s390x now uses the area below 2GiB for SMP lowcore initialization.
Details: Each memory area has metadata at the very beginning. The metadata is a byte array with one entry per usable page (so, excluding the metadata itself). Each entry indicates if the page is special (unused for now), if it is allocated, and the order of the block. Both free and allocated pages are part of larger blocks.
Some more fixed size metadata is present in a fixed-size static array. This metadata contains start and end page frame numbers, the pointer to the metadata array, and the array of freelists. The array of freelists has an entry for each possible order (indicated by the macro NLISTS, defined as BITS_PER_LONG - PAGE_SHIFT).
On allocation, if the free list for the needed size is empty, larger blocks are split. When a small allocation with a large alignment is requested, an appropriately large block is split, to guarantee the alignment.
When a block is freed, an attempt will be made to merge it into the neighbour, iterating the process as long as possible.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20201002154420.292134-5-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
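A simplified sketch of the structures the message describes; the field names, the metadata encoding and MAX_AREAS are approximations, not the exact definitions in lib/alloc_page.c:

    #include <stdint.h>

    #define PAGE_SHIFT     12
    #define BITS_PER_LONG  64
    #define NLISTS         (BITS_PER_LONG - PAGE_SHIFT)  /* one freelist per order */
    #define MAX_AREAS      6                             /* illustrative           */

    /* One metadata byte per usable page: special/allocated bits plus block order. */
    #define PAGE_ALLOCATED  0x40u
    #define PAGE_SPECIAL    0x80u
    #define ORDER_MASK      0x3fu

    struct freelist {
            struct freelist *prev, *next;
    };

    struct mem_area {
            uintptr_t base;                    /* first usable page frame number */
            uintptr_t top;                     /* one past the last usable pfn   */
            uint8_t *page_states;              /* per-page metadata array        */
            struct freelist freelists[NLISTS]; /* free blocks, indexed by order  */
    };

    /* Fixed-size static array holding the per-area metadata. */
    static struct mem_area areas[MAX_AREAS];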
|
#
9e801bd9 |
| 06-Jul-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc_page: move get_order and is_power_of_2 to a bitops.h
The functions get_order and is_power_of_2 are simple and should probably be in a header, like similar simple functions in bitops.h
Since they concern bit manipulation, the logical place for them is in bitops.h
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Message-Id: <20200706164324.81123-4-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
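A simplified rendering of the two helpers; the actual implementations in lib/bitops.h may be written differently (e.g. in terms of bit-scan helpers), this only shows the idea:

    #include <stdbool.h>
    #include <stddef.h>

    static inline bool is_power_of_2(unsigned long value)
    {
            return value && !(value & (value - 1));
    }

    /* Smallest order such that (1 << order) covers 'size'. */
    static inline unsigned int get_order(size_t size)
    {
            unsigned int order = 0;

            while ((size_t)1 << order < size)
                    order++;
            return order;
    }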
|
#
73f4b202 |
| 06-Jul-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc_page: change some parameter types
For size parameters, size_t is probably semantically more appropriate than unsigned long (although they map to the same value).
For order, unsigned long is just too big. Also, get_order returns an unsigned int anyway.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Message-Id: <20200706164324.81123-3-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
4a4f8af2 |
| 22-Jun-2020 |
Claudio Imbrenda <imbrenda@linux.ibm.com> |
lib/alloc_page: make get_order return unsigned int
Since get_order never returns a negative value, it makes sense to make it return an unsigned int.
The returned value will be in practice always very small, a u8 would probably also do the trick.
Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Message-Id: <20200622162141.279716-8-imbrenda@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
f22e527d |
| 02-Apr-2020 |
Eric Auger <eric.auger@redhat.com> |
page_alloc: Introduce get_order()
Compute the power of 2 order of a size. Use it in page_memalign. Other users are looming.
Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Andrew Jones <drjones@redhat.com>
|
#
e9554497 |
| 14-Sep-2019 |
Marc Orr <marcorr@google.com> |
x86: nvmx: test max atomic switch MSRs
Exercise nested VMX's atomic MSR switch code (e.g., VM-entry MSR-load list) at the maximum number of MSRs supported, as described in the SDM, in the appendix chapter titled "MISCELLANEOUS DATA".
Suggested-by: Jim Mattson <jmattson@google.com> Signed-off-by: Marc Orr <marcorr@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
da7eceb3 |
| 03-Apr-2018 |
Thomas Huth <thuth@redhat.com> |
lib/alloc: Fix/check the prototypes in the allocator files
We should make sure that the prototypes match the implementations and thus include the alloc_page.h header from alloc_page.c, and the alloc_phys.h header from alloc_phys.c. This way the file can be compiled with -Wmissing-prototypes and -Wstrict-prototypes, too.
Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <1522743052-8266-1-git-send-email-thuth@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
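The pattern being enforced is simply that each .c file includes its own header, roughly like this sketch:

    /* alloc_page.c (sketch): including our own header lets the compiler check
     * every definition against its prototype under -Wmissing-prototypes and
     * -Wstrict-prototypes. */
    #include "alloc_page.h"

    void *alloc_page(void)
    {
            return alloc_pages(0);  /* must match the prototype in alloc_page.h */
    }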
|
#
be60de6f |
| 17-Jan-2018 |
Andrew Jones <drjones@redhat.com> |
page_alloc: add yet another memalign
If we want both early alloc ops and alloc_page(), then it's best to just give all the memory to page_alloc and then base the early alloc ops on that.
Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
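A rough sketch of the wiring this describes; the struct alloc_ops layout and the function names are assumptions, see lib/alloc.h and the page allocator sources for the real ones:

    #include <stddef.h>

    /* Illustrative only; the real struct alloc_ops lives in lib/alloc.h. */
    struct alloc_ops {
            void *(*memalign)(size_t alignment, size_t size);
    };

    void *page_memalign(size_t alignment, size_t size);  /* from the page allocator */

    static void *early_memalign(size_t alignment, size_t size)
    {
            /* All memory was handed to page_alloc, so just forward to it. */
            return page_memalign(alignment, size);
    }

    struct alloc_ops early_alloc_ops = {
            .memalign = early_memalign,
    };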
|
#
bf62a925 |
| 17-Jan-2018 |
Andrew Jones <drjones@redhat.com> |
page_alloc: allow initialization before setup_vm call
Signed-off-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
|
#
5aca024e |
| 22-Oct-2017 |
Paolo Bonzini <pbonzini@redhat.com> |
lib: move page allocator here from x86
This is another step in porting the x86 (v)malloc implementation to other architectures.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|