dca3f4c0 | 24-Feb-2025 | Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'kvm-x86-2025.02.21' of https://github.com/kvm-x86/kvm-unit-tests into HEAD
KVM-Unit-Tests x86 changes:
- Expand the per-CPU data+stack area to 12KiB per CPU to reduce the probability of tests overflowing their stack and clobbering per-CPU data.
- Add testcases for LA57 canonical checks.
- Add testcases for LAM.
- Add a smoke test to make sure KVM doesn't bleed split-lock #AC/#DB into the guest.
- Fix many warts and bugs in the PMU test, and prepare it for PMU version 5 and beyond.
- Many misc fixes and cleanups.
a33a3ac8 | 01-Jul-2024 | Binbin Wu <binbin.wu@linux.intel.com>
x86: Add test case for INVVPID with LAM
LAM applies to the linear address of the INVVPID operand; however, it doesn't apply to the linear address in the INVVPID descriptor.
The added cases use a tagged operand or a tagged target invalidation address to make sure the behavior is as expected when LAM is on.
Also, the INVVPID case using a tagged operand can serve as a common test case for VMX instruction VM-Exits.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20240701073010.91417-6-binbin.wu@linux.intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
0a6b8b7d | 01-Jul-2024 | Binbin Wu <binbin.wu@linux.intel.com>
x86: Allow setting of CR3 LAM bits if LAM supported
If LINEAR ADDRESS MASKING (LAM) is supported, VM entry allows CR3.LAM_U48 (bit 62) and CR3.LAM_U57 (bit 61) to be set in the CR3 field.
Change the expected results of the vmlaunch tests that set CR3.LAM_U48 or CR3.LAM_U57 when LAM is supported.
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20240701073010.91417-3-binbin.wu@linux.intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
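As a rough illustration of the expected-result change (a minimal sketch with assumed macro and helper names, not code from the test):

    /* Bit positions from the commit message; names are illustrative only. */
    #define CR3_LAM_U57    (1UL << 61)
    #define CR3_LAM_U48    (1UL << 62)

    /* A vmlaunch with a LAM bit set in the CR3 field should now succeed iff
     * the CPU supports LAM; without LAM it must still fail the VM-entry check. */
    static bool vmlaunch_with_cr3_lam_should_succeed(unsigned long cr3, bool has_lam)
    {
            if (cr3 & (CR3_LAM_U48 | CR3_LAM_U57))
                    return has_lam;
            return true;
    }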
d5a6cfac | 18-Nov-2024 | Zide Chen <zide.chen@intel.com>
nVMX: Account for gaps in fixed performance counters
Update the nVMX PERF_GLOBAL_CTRL test to play nice with PMUs that have a discontiguous set of PMCs. On CPUs with PMU Version 5 or later, the set of fixed PMCs may not be contiguous. Use the logic recommended by the Intel SDM to determine if a Fixed Counter is supported:
FxCtr[i]_is_supported := ECX[i] || (EDX[4:0] > i);
For example, it's perfectly valid to have CPUID.0AH.EDX[4:0] == 3 and CPUID.0AH.ECX == 0x77, but checking the fixed counter index only against CPUID.0AH.EDX[4:0] would incorrectly deem FxCtr[6:4] unsupported.
Opportunistically add anythread_deprecated to cpuidA_edx.
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Link: https://lore.kernel.org/r/20241118225207.16596-1-zide.chen@intel.com
[sean: massage changelog, keep "counters" in the names]
Signed-off-by: Sean Christopherson <seanjc@google.com>
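For reference, a minimal C sketch of the SDM logic quoted above (the helper name and parameters are illustrative, not the test's actual code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Fixed counter i is supported if its bit is set in CPUID.0AH:ECX, or if i
     * is below the contiguous-counter count reported in CPUID.0AH:EDX[4:0]. */
    static bool fixed_counter_is_supported(uint32_t cpuid_0a_ecx,
                                           uint32_t cpuid_0a_edx, unsigned int i)
    {
            unsigned int nr_contiguous = cpuid_0a_edx & 0x1f;

            return (cpuid_0a_ecx & (1u << i)) || (i < nr_contiguous);
    }

    /* With EDX[4:0] == 3 and ECX == 0x77, counters 0-2 and 4-6 are supported. */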
05fbb364 | 15-Feb-2025 | Maxim Levitsky <mlevitsk@redhat.com>
nVMX: add a test for canonical checks of various host state vmcs12 fields.
Test the canonical VM entry checks on various host state fields (mostly segment bases) in vmcs12.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Link: https://lore.kernel.org/r/20240907005440.500075-6-mlevitsk@redhat.com
[sean: print expected vs. actual in reports, use descriptive value names]
Link: https://lore.kernel.org/r/20250215013018.1210432-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
afbea997 | 14-Feb-2025 | Sean Christopherson <seanjc@google.com>
nVMX: Clear A/D enable bit in EPTP after negative testcase on non-A/D host
Clear the Access/Dirty enable flag in EPTP after the negative testcase that verifies enabling A/D bits results in failed VM-Entry if A/D bits aren't supported. Leaving the A/D bit set causes the subsequent tests to fail on A/D-disabled hosts.
Fixes: 1d70eb82 ("nVMX x86: Check EPTP on vmentry of L2 guests")
Link: https://lore.kernel.org/r/20250214160639.981517-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
dcec966f | 20-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'kvm-x86-2024.06.14' of https://github.com/kvm-x86/kvm-unit-tests into HEAD
x86 fixes, cleanups, and new testcases:
- Add a testcase to verify that KVM doesn't inject a triple fault (or any other "error") if a nested VM is run with an EP4TA pointing at MMIO.
- Play nice with CR4.CET in test_vmxon_bad_cr()
- Force emulation when testing MSR_IA32_FLUSH_CMD to work around an issue where Skylake CPUs don't follow the architecturally defined behavior, and so that the test doesn't break if/when new bits are supported by future CPUs.
- Rework the async #PF test to support IRQ-based page-ready notifications.
- Fix a variety of issues related to adaptive PEBS.
- Add several nested VMX tests for virtual interrupt delivery and posted interrupts.
- Ensure PAT is loaded with the default value after the nVMX PAT tests (failure to do so was causing tests to fail due to all memory being UC).
- Misc cleanups.
ee1d79c3 | 05-Jun-2024 | Sean Christopherson <seanjc@google.com>
nVMX: Verify KVM actually loads the value in HOST_PAT into the PAT MSR
Check that the PAT MSR is actually loaded with vmcs.HOST_PAT on VM-Exit in the testcase that shoves all legal PAT values into HOST_PAT.
Link: https://lore.kernel.org/r/20240605224527.2907272-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
184ee0d5 | 05-Jun-2024 | Sean Christopherson <seanjc@google.com>
nVMX: Ensure host's PAT is loaded at the end of all VMX tests
Load the host's original PAT on VM-Exit by default in all VMX tests, and manually write PAT (if necessary) with the original value in the test that verifies all legal PAT values can be loaded via GUEST_PAT and HOST_PAT. Failure to (re)load the correct host PAT results in all tests that run after test_load_host_pat() using UC memtype for all memory.
Opportunistically fix a message goof for the ENT_LOAD_PAT=0 case.
Reported-by: Xiangfei Ma <xiangfeix.ma@intel.com>
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20240605224527.2907272-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
386ed5c2 | 11-Dec-2023 | Oliver Upton <oliver.upton@linux.dev>
nVMX: add test for posted interrupts
Test virtual posted interrupts under the following conditions:
- vTPR[7:4] >= VECTOR[7:4]: Expect the L2 interrupt to be blocked. The bit corresponding to the posted interrupt should be set in L2's vIRR. Test with a running guest.
- vTPR[7:4] < VECTOR[7:4]: Expect the interrupt to be delivered and the ISR to execute once. Test with a running and halted guest.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Co-developed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20231211185552.3856862-6-jmattson@google.com
[sean: add a dedicated SPIN_IRR op to clarify and enhance coverage]
Signed-off-by: Sean Christopherson <seanjc@google.com>
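A minimal sketch of the priority-class comparison in the two conditions above (the helper name is illustrative; only bits 7:4 of the TPR and the vector participate):

    #include <stdbool.h>
    #include <stdint.h>

    /* A pending vector is blocked by the TPR when its priority class
     * (bits 7:4) is less than or equal to the TPR's priority class. */
    static bool vector_blocked_by_vtpr(uint8_t vtpr, uint8_t vector)
    {
            return (vtpr >> 4) >= (vector >> 4);
    }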
3238da6b | 11-Dec-2023 | Marc Orr <marc.orr@gmail.com>
nVMX: add self-IPI tests to vmx_basic_vid_test
Extend the VMX "virtual-interrupt delivery test", vmx_basic_vid_test, to verify that virtual-interrupt delivery is triggered by a self-IPI in L2.
Signed-off-by: Marc Orr (Google) <marc.orr@gmail.com>
Co-developed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20231211185552.3856862-5-jmattson@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
d485a75d | 11-Dec-2023 | Marc Orr <marc.orr@gmail.com>
nVMX: test nested EOI virtualization
Add a test for nested VMs that invoke EOI virtualization. Specifically, check that a pending low-priority interrupt, masked by a higher-priority interrupt, is scheduled via "virtual-interrupt delivery," after the higher-priority interrupt executes EOI.
Signed-off-by: Marc Orr (Google) <marc.orr@gmail.com>
Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Co-developed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20231211185552.3856862-4-jmattson@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
a917f7c7 | 11-Dec-2023 | Marc Orr <marc.orr@gmail.com>
nVMX: test nested "virtual-interrupt delivery"
Add test coverage for recognizing and delivering virtual interrupts via VMX's "virtual-interrupt delivery" feature, in the following two scenarios:
nVMX: test nested "virtual-interrupt delivery"
Add test coverage for recognizing and delivering virtual interrupts via VMX's "virtual-interrupt delivery" feature, in the following two scenarios:
1. There's a pending interrupt at VM-entry.
2. There's a pending interrupt during TPR virtualization.
Signed-off-by: Marc Orr (Google) <marc.orr@gmail.com>
Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Co-developed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20231211185552.3856862-3-jmattson@google.com
[sean: omit from base 'vmx' test]
Signed-off-by: Sean Christopherson <seanjc@google.com>
eef1e3d2 | 11-Dec-2023 | Jim Mattson <jmattson@google.com>
nVMX: Enable x2APIC mode for virtual-interrupt delivery tests
Since "virtualize x2APIC mode" is enabled for these tests, call enable_x2apic() so that the x2apic_ops function table will be installed.
Signed-off-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20231211185552.3856862-2-jmattson@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
9b27e5d6 | 13-Sep-2023 | Yang Weijiang <weijiang.yang@intel.com>
nVMX: Introduce new vmx_basic MSR feature bit for vmx tests
Introduce IA32_VMX_BASIC[bit 56] support, i.e., skip the hardware consistency check on the event error code if the bit is supported.
The CET KVM enabling series introduces the vmx_basic_msr feature bit, which causes some of the original test cases that expect a VM entry failure to succeed instead, so the tests report failures. Now check the VM launch status conditionally against the bit's support status so that test results stay consistent with the behavior enforced by KVM.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Link: https://lore.kernel.org/r/20230913235006.74172-4-weijiang.yang@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
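As a rough sketch of the conditional expectation (the helper name is an assumption, not the test's code; the raw IA32_VMX_BASIC value would come from rdmsr of MSR 0x480):

    #include <stdbool.h>
    #include <stdint.h>

    /* If IA32_VMX_BASIC bit 56 is set, hardware skips the error-code
     * consistency check on event injection, so a testcase that relied on that
     * check failing VM entry should instead expect the launch to succeed. */
    static bool expect_vmlaunch_failure_on_bad_error_code(uint64_t vmx_basic_msr)
    {
            return !(vmx_basic_msr & (1ULL << 56));
    }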
0903962d | 13-Sep-2023 | Yang Weijiang <weijiang.yang@intel.com>
nVMX: Rename union vmx_basic and related global variable
vmx_basic is easily confused with exit_reason.basic, so rename the former to vmx_basic_msr, along with the global variable, to make them self-descriptive.
No functional change intended.
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Link: https://lore.kernel.org/r/20230913235006.74172-3-weijiang.yang@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
e87078ea | 11-Sep-2023 | Sean Christopherson <seanjc@google.com>
nVMX: Add a testcase for running L2 with EP4TA that points at MMIO
Add a testcase in test_ept_eptp() to verify that KVM doesn't inject a triple fault (or any other unexpected "error") if L1 runs L2 with an EP4TA that points at MMIO memory. For a very, very long time, KVM synthesized a triple fault in response to a "legal-but-garbage" EP4TA _before_ completing emulation of nested VM-Enter, which is architecturally wrong and triggered various warnings in KVM.
Use the TPM base address for the MMIO backing, as KVM doesn't emulate a TPM (in-kernel) and practically the address is guaranteed to be MMIO.
Drop the manual test for 4-level EPT support, as __setup_ept() performs said check/test.
Link: https://lore.kernel.org/all/20230729005200.1057358-6-seanjc@google.com
Link: https://lore.kernel.org/r/20230911182013.333559-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
5676a865 | 11-Sep-2023 | Sean Christopherson <seanjc@google.com>
nVMX: Use setup_dummy_ept() to configure EPT for test_ept_eptp() test
Use setup_dummy_ept() instead of open coding a rough equivalent in test_ept_eptp().
Link: https://lore.kernel.org/r/20230911182013.333559-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
ee7c8d4e | 11-Sep-2023 | Sean Christopherson <seanjc@google.com>
nVMX: Use helpers to check for WB memtype and 4-level EPT support
Use is_ept_memtype_supported() and is_4_level_ept_supported() to check for basic EPT support instead of open coding checks on ept_vpid.val.
Opportunistically add a report() failure if 4-level EPT isn't supported, as support for 4-level paging is mandatory in any sane configuration.
Link: https://lore.kernel.org/r/20230911182013.333559-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
d4ae0a71 | 09-Jan-2024 | Thomas Huth <thuth@redhat.com>
x86: Fix various typos
Fix typos that have been discovered with the "codespell" utility.
Message-ID: <20240109132902.129377-1-thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
cd5f2fb4 | 20-Sep-2023 | Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'kvm-x86-2023.09.01' of https://github.com/kvm-x86/kvm-unit-tests into HEAD
x86 fixes, cleanups, and new testcases, and a few generic changes
- Fix a bug in runtime.bash that caused it to mishandle "check" strings with multiple entries, e.g. a test that depends on multiple module params
- Make the PMU tests depend on vPMU support being enabled in KVM
- Fix PMU's forced emulation test on CPUs with full-width writes
- Add a PMU testcase for measuring TSX transactional cycles
- Nested SVM testcase for virtual NMIs
- Move a pile of code to ASM_TRY() and "safe" helpers
- Set up the guest stack in the LBRV tests so that the tests don't fail if the compiler decides to generate function calls in guest code
- Ignore the "mispredict" flag in nSVM's LBRV tests to fix false failures
- Clean up usage of helpers that disable interrupts, e.g. stop inserting unnecessary nops
- Add helpers to dedup code for programming the APIC timer
- Fix a variety of bugs in nVMX testcases related to being a 64-bit host
d4fba74a | 01-Sep-2023 | Sean Christopherson <seanjc@google.com>
nVMX: Fix the noncanonical HOST_RIP testcase
Do a bare VMLAUNCH for the noncanonical HOST_RIP testcase to actually test the noncanonical RIP instead of the sane value written by vmlaunch(). Put up a variety of warnings around test_vmx_vmlaunch_must_fail() to discourage improper usage.
Link: https://lore.kernel.org/r/20230901225004.3604702-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
1c56b3de | 01-Sep-2023 | Sean Christopherson <seanjc@google.com>
nVMX: Drop testcase that falsely claims to verify vmcs.HOST_RIP[63:32]
Excise the completely bogus testcase in test_host_addr_size() which purports to verify that setting vmcs.HOST_RIP[63:32] to non-zero values is allowed for 64-bit hosts. The testcase is mindbogglingly broken: setting arbitrary, single bits above bit 46 creates a noncanonical address, and setting arbitrary bits below bit 47 would send the test into the weeds as a "successful" VMLAUNCH generates a VM-Exit, i.e. would load the garbage RIP and immediately encounter a #PF.
The only reason the testcase passes is because it does absolutely nothing useful: vmlaunch() unconditionally writes HOST_RIP before VMLAUNCH, because not jumping to a random RIP on a VM-Exit is mildly important.
Outright drop the testcase, trying to salvage anything from it would be a waste of time as simply running any 64-bit guest will generate a huge variety of RIPs with non-zero values in bits 63:32.
Link: https://lore.kernel.org/r/20230901225004.3604702-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
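For context, a minimal sketch of the 48-bit canonicality rule referenced above (illustrative, not the test's code): an address is canonical iff bits 63:47 are all copies of bit 47, so setting a single high bit (47 or above) in an otherwise-low RIP makes it noncanonical.

    #include <stdbool.h>
    #include <stdint.h>

    /* Canonical for 4-level paging: the value must equal the sign-extension
     * of its low 48 bits. */
    static bool is_canonical_48(uint64_t va)
    {
            uint64_t sext = (uint64_t)((int64_t)(va << 16) >> 16);

            return sext == va;
    }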
aa26659d | 01-Sep-2023 | Sean Christopherson <seanjc@google.com>
nVMX: Shuffle test_host_addr_size() tests to "restore" CR4 and RIP
Re-order the testcases in test_host_addr_size() to guarantee that the host CR4 and RIP values are either written or restored before each testcase. If a test fails unexpectedly, running with a test value from a previous testcase may cause all subsequent tests to also fail, e.g. if the CR4 PCIDE test fails, all of the RIP tests will fail because of the bad CR4.
This also "fixes" the noncanonical RIP testcase, as running with the bad CR4 setup by the !PAE testcase would mask a missed noncanonical check.
[sean: Surprise! The bad CR4 is indeed masking a bug. I'm leaving it for now and intentionally creating a failing testcase for a commit or two to highlight the importance of cleaning up after testcases, and isolating what is actually being tested.]
Link: https://lore.kernel.org/r/20230901225004.3604702-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
d96a37c2 | 01-Sep-2023 | Sean Christopherson <seanjc@google.com>
nVMX: Rename vmlaunch_succeeds() to vmlaunch()
Rename vmlaunch_succeeds() to just vmlaunch(); the "succeeds" suffix is misleading for any test that expects VMLAUNCH to _fail_, as it gives the false impression that the helper expects VMLAUNCH to succeed.
Link: https://lore.kernel.org/r/20230901225004.3604702-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>