36fb9e84 | 21-Feb-2025 |
Sean Christopherson <seanjc@google.com> |
x86: Include libcflat.h in atomic.h for u64 typedef
Include libcflat.h in x86's atomic.h to pick up the u64 typedef, which is used to define atomic64_t. The missing include results in build errors if a test includes atomic.h without (or before) libcflat.h.
lib/x86/atomic.h:162:1: error: unknown type name ‘u64’
  162 | u64 atomic64_cmpxchg(atomic64_t *v, u64 old, u64 new);
Link: https://lore.kernel.org/r/20250221204148.2171418-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
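A minimal sketch of the dependency being fixed (illustrative names; the real definitions live in lib/libcflat.h and lib/x86/atomic.h):

```c
#include <assert.h>
#include <stdint.h>

/* In kvm-unit-tests this typedef comes from lib/libcflat.h; atomic.h
 * must pull that header in (directly or indirectly) before defining
 * atomic64_t, or the compiler reports an unknown type name. */
typedef uint64_t u64;

typedef struct {
    volatile u64 counter;
} atomic64_t;

static inline u64 atomic64_read(const atomic64_t *v)
{
    return v->counter;
}
```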
|
0164d759 | 01-Jul-2024 |
Binbin Wu <binbin.wu@linux.intel.com> |
x86: Add test cases for LAM_{U48,U57}
This unit test covers: 1. CR3 LAM bits toggles. 2. Memory/MMIO access with user mode address containing LAM metadata.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20240701073010.91417-5-binbin.wu@linux.intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
14520f8e | 01-Jul-2024 |
Robert Hoo <robert.hu@linux.intel.com> |
x86: Add test case for LAM_SUP
This unit test covers: 1. CR4.LAM_SUP toggles. 2. Memory & MMIO access with supervisor mode address with LAM metadata. 3. INVLPG's memory operand doesn't contain LAM metadata; if the address is in non-canonical form, the INVLPG is the same as a NOP (no #GP). 4. INVPCID's memory operand (the descriptor pointer) may contain LAM metadata; however, the address in the descriptor must be canonical.
In x86/unittests.cfg, add 2 test cases/guest conf, with and without LAM.
LAM feature spec: https://cdrdv2.intel.com/v1/dl/getContent/671368, Chapter LINEAR ADDRESS MASKING (LAM)
Signed-off-by: Robert Hoo <robert.hu@linux.intel.com> Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20240701073010.91417-4-binbin.wu@linux.intel.com [sean: s/set/get for the helper, smush tests, call it "lam", use "-cpu max"] Signed-off-by: Sean Christopherson <seanjc@google.com>
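A rough software model of the masking these tests exercise, under the assumption that LAM57 treats bits 62:57 as metadata and effectively replaces them with the sign-extension of bit 56 before the canonical check (helper names here are hypothetical, not the unit test's API):

```c
#include <assert.h>
#include <stdint.h>

/* Bits 62:57 are metadata under LAM57. */
#define LAM57_META_MASK (0x3FULL << 57)

/* Stuff metadata into the masked bits of a linear address. */
static inline uint64_t lam57_tag(uint64_t va, uint64_t meta)
{
    return (va & ~LAM57_META_MASK) | ((meta & 0x3F) << 57);
}

/* Model of what hardware does before the canonical check: replace the
 * metadata bits with copies of bit 56. */
static inline uint64_t lam57_untag(uint64_t va)
{
    return (va & (1ULL << 56)) ? (va | LAM57_META_MASK)
                               : (va & ~LAM57_META_MASK);
}
```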
|
0a6b8b7d | 01-Jul-2024 |
Binbin Wu <binbin.wu@linux.intel.com> |
x86: Allow setting of CR3 LAM bits if LAM supported
If LINEAR ADDRESS MASKING (LAM) is supported, VM entry allows CR3.LAM_U48 (bit 62) and CR3.LAM_U57 (bit 61) to be set in CR3 field.
Change the test result expectations when setting CR3.LAM_U48 or CR3.LAM_U57 on vmlaunch tests when LAM is supported.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com> Reviewed-by: Chao Gao <chao.gao@intel.com> Link: https://lore.kernel.org/r/20240701073010.91417-3-binbin.wu@linux.intel.com Signed-off-by: Sean Christopherson <seanjc@google.com>
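The bit positions and the expectation change can be sketched as follows (the validity helper is illustrative, not the actual vmx test plumbing):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define X86_CR3_LAM_U57 (1ULL << 61)
#define X86_CR3_LAM_U48 (1ULL << 62)

/* With LAM supported, CR3.LAM_U48/LAM_U57 are legal in the CR3 field
 * at VM entry; without LAM, setting either bit is expected to fail. */
static bool cr3_valid_for_vmentry(uint64_t cr3, bool lam_supported)
{
    uint64_t lam_bits = cr3 & (X86_CR3_LAM_U48 | X86_CR3_LAM_U57);

    return lam_bits == 0 || lam_supported;
}
```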
|
8d1acfe4 | 15-Feb-2025 |
Xiong Zhang <xiong.y.zhang@intel.com> |
x86: pmu: Remove duplicate code in pmu_init()
There is identical duplicated code in the pmu_init() helper; remove the duplicate code.
Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Reviewed-by: Mingwei Zhang <mizhang@google.com> Link: https://lore.kernel.org/r/20250215013636.1214612-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
d467e659 | 21-Feb-2025 |
Sean Christopherson <seanjc@google.com> |
x86: Move SMP #defines from apic-defs.h to smp.h
Now that the __ASSEMBLY__ versus __ASSEMBLER__ mess is sorted out, move the SMP related #defines from apic-defs.h to smp.h, and drop the comment that explains the hackery.
Opportunistically make REALMODE_GDT_LOWMEM visible to assembly code as well, and drop efistart64.S's local copy.
Link: https://lore.kernel.org/r/20250221233832.2251456-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
c8a8a358 | 21-Feb-2025 |
Hang SU <darcysail@gmail.com> |
x86: replace segment selector magic number with macro definition
Add an assembly check in desc.h so that the segment selector magic numbers can be replaced with macro definitions.
Signed-off-by: Hang SU <darcy.sh@antgroup.com> Link: https://lore.kernel.org/r/20250221225406.2228938-4-seanjc@google.com [sean: fix KERNEL_CS vs. KERNEL_CS32 goof] Signed-off-by: Sean Christopherson <seanjc@google.com>
|
f372d35f | 21-Feb-2025 |
Sean Christopherson <seanjc@google.com> |
x86: Commit to using __ASSEMBLER__ instead of __ASSEMBLY__
Convert all two of x86's anti-assembly #ifdefs from __ASSEMBLY__ to __ASSEMBLER__. Usage of __ASSEMBLY__ was inherited blindly from the Linux kernel, and must be manually defined, e.g. through build rules or with explicit #defines in assembly code. __ASSEMBLER__, on the other hand, is automatically defined by the compiler when preprocessing assembly, i.e. it doesn't require manual #defines for the code to function correctly.
Convert only x86 for the time being, as x86 doesn't actually rely on __ASSEMBLY__ (a clever observer will note that it's never #defined on x86). E.g. trying to include x86's page.h doesn't work as is. All other architectures actually rely on __ASSEMBLY__, and will be dealt with separately.
Note, while only gcc appears to officially document __ASSEMBLER__, clang has followed suit since at least clang 6.0, and clang 6.0 doesn't come remotely close to being able to compile KVM-Unit-Tests.
Link: https://gcc.gnu.org/onlinedocs/cpp/Standard-Predefined-Macros.html#Standard-Predefined-Macros Link: https://lore.kernel.org/r/20250221225406.2228938-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
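The pattern being converted is the usual dual-use header guard; since the preprocessor predefines __ASSEMBLER__ for assembly input, no build-rule support is needed. A sketch (names are illustrative):

```c
#include <assert.h>

/* A page.h-style header usable from both C and assembly.  The compiler
 * defines __ASSEMBLER__ automatically when preprocessing .S files, so
 * C-only constructs can be hidden without any manual #defines. */
#define PAGE_SIZE 4096          /* visible to C and assembly alike */

#ifndef __ASSEMBLER__
/* C-only: a typedef or function would be a syntax error in assembly. */
typedef unsigned long pteval_t;

static inline unsigned long page_align(unsigned long addr)
{
    return addr & ~(PAGE_SIZE - 1UL);
}
#endif
```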
|
4c5d3713 | 21-Feb-2025 |
Sean Christopherson <seanjc@google.com> |
x86: Move descriptor table selector #defines to the top of desc.h
Hoist the selector #defines in desc.h to the very top so that they can be exposed to assembly code with minimal #ifdefs.
No functional change intended.
Link: https://lore.kernel.org/r/20250221225406.2228938-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
f6257e24 | 15-Feb-2025 |
Maxim Levitsky <mlevitsk@redhat.com> |
x86: Add testcases for writing (non)canonical LA57 values to MSRs and bases
Extend the LA57 test to thoroughly validate the canonical checks that are done when setting various MSRs and CPU registers. CPUs that support LA57 have convoluted behavior when it comes to canonical checks. Writes to MSRs and descriptor table bases, as well as TLB invalidation instructions, don't consult CR4.LA57, and so a value that is 57-bit canonical but not 48-bit canonical is allowed irrespective of CR4.LA57 if the CPU supports 5-level paging.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20240907005440.500075-5-mlevitsk@redhat.com Co-developed-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20250215013018.1210432-6-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
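The 48-bit versus 57-bit distinction the test exercises can be modeled in a few lines (software model only; hardware performs this check internally):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A value is N-bit canonical if bits 63:N-1 are all copies of bit N-1,
 * i.e. sign-extending the low N bits reproduces the value. */
static bool is_canonical(uint64_t va, int bits)
{
    return (uint64_t)(((int64_t)va << (64 - bits)) >> (64 - bits)) == va;
}
```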
|
b88e90e6 | 15-Feb-2025 |
Maxim Levitsky <mlevitsk@redhat.com> |
x86: Move struct invpcid_desc descriptor to processor.h
Move struct invpcid_desc descriptor to processor.h so that it can be used in tests that are external to pcid.c.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20240907005440.500075-4-mlevitsk@redhat.com Link: https://lore.kernel.org/r/20250215013018.1210432-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
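For reference, the INVPCID descriptor is a 128-bit structure, roughly as below (field names as in my reading of the unit test source; the layout itself follows the SDM's INVPCID description):

```c
#include <assert.h>
#include <stdint.h>

/* INVPCID descriptor: PCID in bits 11:0 of the first quadword, the
 * rest reserved, and the linear address in the second quadword. */
struct invpcid_desc {
    uint64_t pcid : 12;
    uint64_t rsv  : 52;
    uint64_t addr;
};
```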
|
b1f3eec1 | 15-Feb-2025 |
Maxim Levitsky <mlevitsk@redhat.com> |
x86: Add a few functions for gdt manipulation
Add a few functions that will be used to manipulate various segment bases that are loaded via GDT.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20240907005440.500075-3-mlevitsk@redhat.com Link: https://lore.kernel.org/r/20250215013018.1210432-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
5047281a | 15-Feb-2025 |
Maxim Levitsky <mlevitsk@redhat.com> |
x86: Add _safe() and _fep_safe() variants to segment base load instructions
Add _safe() and _fep_safe() helpers for segment/base instructions; the helpers will be used to validate various ways of setting the segment bases and GDT/LDT bases.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Link: https://lore.kernel.org/r/20240907005440.500075-2-mlevitsk@redhat.com Link: https://lore.kernel.org/r/20250215013018.1210432-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
b94ace2e | 15-Feb-2025 |
Sean Christopherson <seanjc@google.com> |
x86: Increase per-CPU stack/data area to 12KiB
Increase the size of the per-CPU stack/data area from one page to three, i.e. from 4KiB to 12KiB. KVM-Unit-Tests currently places the per-CPU data at the bottom of the stack page, i.e. the stack "page" is actually a page minus the size of the per-CPU area. And of course there's no guard page or buffer in between the two, and so overflowing the stack clobbers per-CPU data and sends tests into the weeds in weird ways.
Punt on less awful infrastructure, and settle for fixing the most egregious problem of tests having less than 4KiB of stack to work with.
Link: https://lore.kernel.org/r/20250215012032.1206409-4-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
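The layout described above, sketched with hypothetical helper names:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE    4096UL
#define PER_CPU_SIZE (3 * PAGE_SIZE)    /* 12KiB after this change */

/* Per-CPU data sits at the bottom of each CPU's area; the stack grows
 * down from the top, with no guard page in between the two. */
static inline uintptr_t percpu_area(uintptr_t base, int cpu)
{
    return base + (uintptr_t)cpu * PER_CPU_SIZE;
}

static inline uintptr_t stack_top(uintptr_t base, int cpu)
{
    return percpu_area(base, cpu) + PER_CPU_SIZE;
}
```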
|
2821b32d | 15-Feb-2025 |
Sean Christopherson <seanjc@google.com> |
x86: Add a macro for the size of the per-CPU stack/data area
Add a macro to define the size of the per-CPU stack/data area so that it's somewhat possible to make sense of the madness.
Link: https://lore.kernel.org/r/20250215012032.1206409-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
2f3c0286 | 14-Feb-2025 |
Nicolas Saenz Julienne <nsaenz@amazon.com> |
x86: Make set/clear_bit() atomic
x86 is the only architecture that defines set/clear_bit() as non-atomic. This makes it incompatible with arch-agnostic code that might implicitly require atomicity. And it was observed to corrupt the 'online_cpus' bitmap, as non-BSP CPUs perform RMWs on the bitmap concurrently during bring-up. See:
ap_start64()
  save_id()
    set_bit(apic_id(), online_cpus)
Address this by making set/clear_bit() atomic.
Signed-off-by: Nicolas Saenz Julienne <nsaenz@amazon.com> Link: https://lore.kernel.org/r/20250214173644.22895-1-nsaenz@amazon.com Signed-off-by: Sean Christopherson <seanjc@google.com>
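The fix amounts to turning the plain read-modify-write into a single indivisible one; modeled here with a compiler atomic builtin rather than the actual lock-prefixed x86 assembly:

```c
#include <assert.h>

#define BITS_PER_LONG (8 * sizeof(long))

/* Old behavior: the load/or/store can interleave across CPUs, losing
 * bits -- exactly the online_cpus corruption described above. */
static inline void set_bit_nonatomic(int nr, unsigned long *addr)
{
    addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* New behavior: one atomic RMW (x86 compiles this to a locked "or"). */
static inline void set_bit_atomic(int nr, unsigned long *addr)
{
    __atomic_fetch_or(&addr[nr / BITS_PER_LONG],
                      1UL << (nr % BITS_PER_LONG), __ATOMIC_SEQ_CST);
}
```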
|
386ed5c2 | 11-Dec-2023 |
Oliver Upton <oliver.upton@linux.dev> |
nVMX: add test for posted interrupts
Test virtual posted interrupts under the following conditions:
- vTPR[7:4] >= VECTOR[7:4]: Expect the L2 interrupt to be blocked. The bit corresponding to the posted interrupt should be set in L2's vIRR. Test with a running guest.
- vTPR[7:4] < VECTOR[7:4]: Expect the interrupt to be delivered and the ISR to execute once. Test with a running and halted guest.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Co-developed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Jim Mattson <jmattson@google.com> Link: https://lore.kernel.org/r/20231211185552.3856862-6-jmattson@google.com [sean: add a dedicated SPIN_IRR op to clarify and enhance coverage] Signed-off-by: Sean Christopherson <seanjc@google.com>
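The comparison driving both cases above is on priority classes, i.e. bits 7:4 of the TPR and of the vector; a one-line model (illustrative helper name):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* An interrupt's priority class is vector[7:4]; it is blocked when
 * that class does not exceed the TPR's class (vTPR[7:4]). */
static inline bool blocked_by_vtpr(uint8_t vtpr, uint8_t vector)
{
    return (vtpr >> 4) >= (vector >> 4);
}
```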
|
a917f7c7 | 11-Dec-2023 |
Marc Orr <marc.orr@gmail.com> |
nVMX: test nested "virtual-interrupt delivery"
Add test coverage for recognizing and delivering virtual interrupts via VMX's "virtual-interrupt delivery" feature, in the following two scenarios:
1. There's a pending interrupt at VM-entry. 2. There's a pending interrupt during TPR virtualization.
Signed-off-by: Marc Orr (Google) <marc.orr@gmail.com> Co-developed-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Oliver Upton <oliver.upton@linux.dev> Co-developed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Jim Mattson <jmattson@google.com> Link: https://lore.kernel.org/r/20231211185552.3856862-3-jmattson@google.com [sean: omit from base 'vmx' test] Signed-off-by: Sean Christopherson <seanjc@google.com>
|
fc17d527 | 06-Mar-2024 |
Sean Christopherson <seanjc@google.com> |
x86/pmu: Iterate over adaptive PEBS flag combinations
Iterate over all possible combinations of adaptive PEBS flags, instead of simply testing each flag individually. There are currently only 16 possible combinations, i.e. there's no reason not to exhaustively test every one.
Opportunistically rename PEBS_DATACFG_GP to PEBS_DATACFG_GPRS to differentiate it from general purposes *counters*, which KVM also tends to abbreviate as "GP".
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://lore.kernel.org/r/20240306230153.786365-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
51b87946 | 17-Apr-2024 |
Mingwei Zhang <mizhang@google.com> |
x86: Add FEP support on read/write register instructions
Add FEP support on read/write register instructions to enable testing rdmsr and wrmsr when force emulation is turned on.
Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Mingwei Zhang <mizhang@google.com> Link: https://lore.kernel.org/r/20240417232906.3057638-2-mizhang@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
3ed8e382 | 08-Jan-2024 |
Dan Wu <dan1.wu@intel.com> |
x86/asyncpf: Update async page fault test for IRQ-based "page ready"
KVM switched to use interrupt for 'page ready' APF event since Linux v5.10 and the legacy mechanism using #PF was deprecated. Interrupt-based 'page-ready' notification requires KVM_ASYNC_PF_DELIVERY_AS_INT to be set as well in MSR_KVM_ASYNC_PF_EN to enable asyncpf.
Update asyncpf.c for the new interrupt-based notification to check for (KVM_FEATURE_ASYNC_PF && KVM_FEATURE_ASYNC_PF_INT) support, and implement interrupt-based 'page-ready' handler with the necessary struct changes.
To run this test, add the QEMU option "-cpu host" to check CPUID, since KVM_FEATURE_ASYNC_PF_INT can't be detected without "-cpu host".
Opportunistically update the "help" section to describe how to setup cgroups for cgroup v1 vs. v2.
Signed-off-by: Dan Wu <dan1.wu@intel.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://lore.kernel.org/r/20240108063014.41117-1-dan1.wu@intel.com [sean: report skip instead of fail if no async #PFs occur, massage changelog] Signed-off-by: Sean Christopherson <seanjc@google.com>
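The relevant MSR bits, for reference (bit positions per KVM's public MSR ABI; the enable value below is a sketch of what the test writes, omitting the required shared-area address that gets OR'd into the upper bits):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_KVM_ASYNC_PF_EN          0x4b564d02
#define KVM_ASYNC_PF_ENABLED         (1ULL << 0)
#define KVM_ASYNC_PF_SEND_ALWAYS     (1ULL << 1)
#define KVM_ASYNC_PF_DELIVERY_AS_INT (1ULL << 3)

/* Interrupt-based "page ready" requires DELIVERY_AS_INT in addition
 * to the enable bit. */
static inline uint64_t asyncpf_enable_bits(void)
{
    return KVM_ASYNC_PF_ENABLED | KVM_ASYNC_PF_DELIVERY_AS_INT;
}
```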
|
95a94088 | 04-May-2024 |
Nicholas Piggin <npiggin@gmail.com> |
lib: Use vmalloc.h for setup_mmu definition
There is no good reason to put setup_vm in libcflat.h when it's defined in vmalloc.h.
Acked-by: Andrew Jones <andrew.jones@linux.dev> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Message-ID: <20240504122841.1177683-24-npiggin@gmail.com> Signed-off-by: Thomas Huth <thuth@redhat.com>
|
a8a78d75 | 05-Mar-2024 |
Andrew Jones <andrew.jones@linux.dev> |
treewide: lib/stack: Fix backtrace
We should never pass the result of __builtin_frame_address(0) to another function since the compiler is within its rights to pop the frame to which it points before making the function call, as may be done for tail calls. Nobody has complained about backtrace(), so likely all compilations have been inlining backtrace_frame(), not dropping the frame on the tail call, or nobody is looking at traces. However, for riscv, when built for EFI, it does drop the frame on the tail call, and it was noticed. Preemptively fix backtrace() for all architectures.
Fixes: 52266791750d ("lib: backtrace printing") Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Andrew Jones <andrew.jones@linux.dev>
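The shape of the fix, with a stub standing in for the per-arch unwinder (illustrative only; the real code lives in lib/stack.c):

```c
#include <assert.h>
#include <stddef.h>

/* Stub for the per-arch unwinder; the real one walks frame pointers. */
static int backtrace_frame(const void *frame, const void **addrs, int max)
{
    (void)frame; (void)addrs; (void)max;
    return 0;
}

static int backtrace(const void **addrs, int max)
{
    int n = backtrace_frame(__builtin_frame_address(0), addrs, max);

    /* The empty asm keeps the call above from being compiled as a tail
     * call, so this function's frame cannot be popped before the
     * callee dereferences it. */
    __asm__ __volatile__("" ::: "memory");
    return n;
}
```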
|
6a49efdb | 13-Apr-2023 |
Mathias Krause <minipli@grsecurity.net> |
x86/fault_test: Preserve exception handler
fault_test() replaces the exception handler for in-kernel tests with a longjmp() based exception handling. However, it leaves the exception handler in place which may confuse later test code triggering the same exception without installing a handler first.
Fix this by restoring the previous exception handler, as running the longjmp() handler out of context will lead to no good.
Signed-off-by: Mathias Krause <minipli@grsecurity.net> Link: https://lore.kernel.org/r/20230413184219.36404-11-minipli@grsecurity.net Signed-off-by: Sean Christopherson <seanjc@google.com>
|
47a84f27 | 13-Apr-2023 |
Mathias Krause <minipli@grsecurity.net> |
x86/run_in_user: Reload SS after successful return
Complement commit 663f9e447b98 ("x86: Fix a #GP from occurring in usermode library's exception handlers") and restore SS on a regular return as well.
The INT-based "syscall" will cause SS to be loaded with the NULL selector (see SDM Vol. 1, Interrupt and Exception Behavior in 64-Bit Mode: "The new SS is set to NULL if there is a change in CPL."), which reduces the coverage provided by emulator64.c's "mov null, %%ss" test, as SS is already loaded with the NULL selector.
Fix this by loading SS with KERNEL_DS after a successful userland function call as well, as we already do in case of exceptions.
Signed-off-by: Mathias Krause <minipli@grsecurity.net> Link: https://lore.kernel.org/r/20230413184219.36404-10-minipli@grsecurity.net [sean: use "rm" constraint, rephrase impact on emulator64's test] Signed-off-by: Sean Christopherson <seanjc@google.com>
|