# dcec966f | 20-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'kvm-x86-2024.06.14' of https://github.com/kvm-x86/kvm-unit-tests into HEAD
x86 fixes, cleanups, and new testcases:
- Add a testcase to verify that KVM doesn't inject a triple fault (or any other "error") if a nested VM is run with an EP4TA pointing at MMIO.
- Play nice with CR4.CET in test_vmxon_bad_cr()
- Force emulation when testing MSR_IA32_FLUSH_CMD to work around an issue where Skylake CPUs don't follow the architecturally defined behavior, and so that the test doesn't break if/when new bits are supported by future CPUs.
- Rework the async #PF test to support IRQ-based page-ready notifications.
- Fix a variety of issues related to adaptive PEBS.
- Add several nested VMX tests for virtual interrupt delivery and posted interrupts.
- Ensure PAT is loaded with the default value after the nVMX PAT tests (failure to do so was causing tests to fail due to all memory being UC).
- Misc cleanups.
# fc17d527 | 06-Mar-2024 | Sean Christopherson <seanjc@google.com>
x86/pmu: Iterate over adaptive PEBS flag combinations
Iterate over all possible combinations of adaptive PEBS flags, instead of simply testing each flag individually. There are currently only 16 possible combinations, i.e. there's no reason not to exhaustively test every one.
Opportunistically rename PEBS_DATACFG_GP to PEBS_DATACFG_GPRS to differentiate it from general purpose *counters*, which KVM also tends to abbreviate as "GP".
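As an illustration (not the patch's actual code), a minimal sketch of the exhaustive loop; the PEBS_DATACFG_* bit positions follow the Intel SDM, and configure_pebs_and_run() is a hypothetical stand-in for the per-combination test body:

    #include <stdint.h>
    #include <stdio.h>

    /* Adaptive PEBS data-configuration flag bits (Intel SDM bit layout). */
    #define PEBS_DATACFG_MEMINFO    (1u << 0)
    #define PEBS_DATACFG_GPRS       (1u << 1)  /* renamed from PEBS_DATACFG_GP */
    #define PEBS_DATACFG_XMMS       (1u << 2)
    #define PEBS_DATACFG_LBRS       (1u << 3)

    /* Hypothetical stand-in for the test's per-combination work. */
    static void configure_pebs_and_run(uint64_t data_cfg)
    {
        printf("testing PEBS_DATA_CFG = %#lx\n", (unsigned long)data_cfg);
    }

    static void test_adaptive_pebs_combinations(void)
    {
        uint64_t cfg;

        /* 4 flags => 16 combinations; cheap enough to test them all. */
        for (cfg = 0; cfg < (1u << 4); cfg++)
            configure_pebs_and_run(cfg);
    }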
Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Link: https://lore.kernel.org/r/20240306230153.786365-3-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
# b883751a | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86/pmu: Update testcases to cover AMD PMU
AMD's core PMU before Zen4 had no version numbers and no fixed counters; it had a hard-coded number of generic counters and a hard-coded bit-width, and only hardware events common across AMD generations (starting with the K7) are added to the amd_gp_events[] table.
All of the above differences are instantiated at the detection step, which also covers the K7 PMU registers, consistent with bare metal.
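A minimal sketch of what such a detection step could look like; the MSR indexes come from the AMD APM, while pmu_init_amd() and this_cpu_has_perfctr_core() are hypothetical names:

    #include <stdbool.h>
    #include <stdint.h>

    /* MSR indexes per the AMD APM; counts follow common Linux conventions. */
    #define MSR_K7_EVNTSEL0      0xc0010000u
    #define MSR_K7_PERFCTR0      0xc0010004u
    #define MSR_F15H_PERF_CTL0   0xc0010200u
    #define MSR_F15H_PERF_CTR0   0xc0010201u

    struct pmu_caps {
        uint32_t msr_gp_counter_base;
        uint32_t msr_gp_event_select_base;
        int nr_gp_counters;
        int gp_counter_width;
    };

    /* Hypothetical CPUID probe for the PERFCTR_CORE feature bit. */
    static bool this_cpu_has_perfctr_core(void) { return false; }

    /*
     * Sketch of the AMD branch of PMU detection: there is no version
     * number to read, so everything is hard-coded off the PERFCTR_CORE
     * feature bit, falling back to the legacy K7 MSRs without it.
     */
    static void pmu_init_amd(struct pmu_caps *pmu)
    {
        if (this_cpu_has_perfctr_core()) {
            pmu->msr_gp_counter_base = MSR_F15H_PERF_CTR0;
            pmu->msr_gp_event_select_base = MSR_F15H_PERF_CTL0;
            pmu->nr_gp_counters = 6;
        } else {
            pmu->msr_gp_counter_base = MSR_K7_PERFCTR0;
            pmu->msr_gp_event_select_base = MSR_K7_EVNTSEL0;
            pmu->nr_gp_counters = 4;
        }
        pmu->gp_counter_width = 48;
    }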
Cc: Sandipan Das <sandipan.das@amd.com> Signed-off-by: Like Xu <likexu@tencent.com> [sean: set bases to K7 values for !PERFCTR_CORE case (reported by Paolo)] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-27-seanjc@google.com
# dd602b6f | 02-Nov-2022 | Sean Christopherson <seanjc@google.com>
x86/pmu: Add pmu_caps flag to track if CPU is Intel (versus AMD)
Add a flag to track whether the PMU is backed by an Intel CPU. Future support for AMD will sadly need to constantly check whether the PMU is Intel or AMD, and invoking is_intel() every time is rather expensive, as it requires CPUID (a VM-Exit) and a string comparison.
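For illustration, a sketch of the trade-off; raw_cpuid(), pmu_init(), and the struct layout are simplified stand-ins, not the test's actual helpers:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct cpuid_regs { uint32_t a, b, c, d; };

    /* Raw CPUID wrapper; a VM-Exit when run virtualized. */
    static struct cpuid_regs raw_cpuid(uint32_t fn)
    {
        struct cpuid_regs r;
        __asm__ volatile("cpuid"
                         : "=a"(r.a), "=b"(r.b), "=c"(r.c), "=d"(r.d)
                         : "a"(fn), "c"(0));
        return r;
    }

    /* The expensive path: CPUID plus a 12-byte vendor-string compare. */
    static bool is_intel(void)
    {
        struct cpuid_regs id = raw_cpuid(0);
        char vendor[12];

        memcpy(vendor + 0, &id.b, 4);
        memcpy(vendor + 4, &id.d, 4);
        memcpy(vendor + 8, &id.c, 4);
        return !memcmp(vendor, "GenuineIntel", 12);
    }

    static struct { bool is_intel; } pmu;

    /* Pay the cost once at init; later checks are a cached load. */
    static void pmu_init(void)
    {
        pmu.is_intel = is_intel();
    }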
Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-26-seanjc@google.com
# 62ba5036 | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86/pmu: Add global helpers to cover Intel Arch PMU Version 1
To test Intel arch PMU version 1, most of the basic framework and the use cases that exercise any PMU counter need no changes, except that registers introduced only in PMU version 2 must not be accessed.
Adding a few guard checks seamlessly supports version 1, while opening the door for normal AMD PMU tests.
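A sketch of the kind of guard check being added, assuming a snapshotted pmu.version; the helper name and wrmsr() stub are illustrative:

    #include <stdint.h>

    #define MSR_CORE_PERF_GLOBAL_CTRL    0x38fu  /* Intel SDM */

    static struct { int version; int nr_gp_counters; } pmu;

    /* Hypothetical MSR write stub. */
    static void wrmsr(uint32_t index, uint64_t val) { (void)index; (void)val; }

    /*
     * Guarded enable: on an arch PMU v1 the global control MSR does not
     * exist, so counters are controlled purely through their event
     * selects and the guard simply skips the v2-only register.
     */
    static void pmu_enable_all_gp_counters(void)
    {
        if (pmu.version > 1)
            wrmsr(MSR_CORE_PERF_GLOBAL_CTRL,
                  (1ull << pmu.nr_gp_counters) - 1);
    }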
Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-24-seanjc@google.com
# 2ae41f5d | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86: Add tests for Guest Processor Event Based Sampling (PEBS)
This unit test is intended to exercise KVM's support for Processor Event Based Sampling (PEBS), another PMU feature on Intel processors (starting with Ice Lake Server).
If a bit in PEBS_ENABLE is set to 1, its corresponding counter will write at least one PEBS record (including partial state of the vCPU at the time of the current hardware event) to guest memory on counter overflow, and trigger an interrupt at a specific DS state. The format of a PEBS record can be configured by another register.
These tests cover most usage scenarios, including some specially constructed ones (not typical behaviour of the Linux PEBS driver). They lower the barrier for others to understand this feature and open up more exploration of the KVM implementation and the hardware feature itself.
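A hedged sketch of how a test might arm one PEBS-enabled counter; the MSR indexes are from the Intel SDM, wrmsr() and arm_pebs_counter0() are illustrative, and DS-area setup (IA32_DS_AREA) is omitted:

    #include <stdint.h>

    /* Intel SDM MSR indexes. */
    #define MSR_IA32_PEBS_ENABLE    0x3f1u
    #define MSR_PEBS_DATA_CFG       0x3f2u
    #define MSR_IA32_PMC0           0x4c1u
    #define MSR_P6_EVNTSEL0         0x186u

    /* Hypothetical MSR write stub. */
    static void wrmsr(uint32_t index, uint64_t val) { (void)index; (void)val; }

    /*
     * Arm GP counter 0 on the edge of overflow with PEBS enabled: the
     * next qualifying event overflows the counter, the CPU writes a
     * PEBS record to the DS buffer, and a PMI fires.
     */
    static void arm_pebs_counter0(uint64_t event_sel, uint64_t data_cfg)
    {
        wrmsr(MSR_PEBS_DATA_CFG, data_cfg);    /* record format           */
        wrmsr(MSR_IA32_PMC0, (uint64_t)-1);    /* overflow on next event  */
        wrmsr(MSR_P6_EVNTSEL0, event_sel);     /* program + enable event  */
        wrmsr(MSR_IA32_PEBS_ENABLE, 1ull);     /* PEBS for counter 0      */
    }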
Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-23-seanjc@google.com
# 8a2866d1 | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86/pmu: Track global status/control/clear MSRs in pmu_caps
Track the global PMU MSRs in pmu_caps so that tests don't need to manually differentiate between AMD and Intel. Although AMD and Intel PMUs have the same semantics in terms of global control features (including ctl and status), their MSR indexes are not the same.
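A sketch of the snapshotting this enables, using Intel SDM / AMD APM MSR indexes; the struct fields and helper name are illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_CORE_PERF_GLOBAL_CTRL             0x38fu
    #define MSR_CORE_PERF_GLOBAL_STATUS           0x38eu
    #define MSR_CORE_PERF_GLOBAL_OVF_CTRL         0x390u
    #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS     0xc0000300u
    #define MSR_AMD64_PERF_CNTR_GLOBAL_CTL        0xc0000301u
    #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302u

    struct pmu_caps {
        uint32_t msr_global_ctl;
        uint32_t msr_global_status;
        uint32_t msr_global_status_clr;
    };

    /*
     * Same semantics, different indexes: pick the set once at init so
     * tests can use pmu.msr_global_* without vendor checks.
     */
    static void pmu_snapshot_global_msrs(struct pmu_caps *pmu, bool intel)
    {
        if (intel) {
            pmu->msr_global_ctl = MSR_CORE_PERF_GLOBAL_CTRL;
            pmu->msr_global_status = MSR_CORE_PERF_GLOBAL_STATUS;
            pmu->msr_global_status_clr = MSR_CORE_PERF_GLOBAL_OVF_CTRL;
        } else {
            pmu->msr_global_ctl = MSR_AMD64_PERF_CNTR_GLOBAL_CTL;
            pmu->msr_global_status = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS;
            pmu->msr_global_status_clr = MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR;
        }
    }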
Signed-off-by: Like Xu <likexu@tencent.com> [sean: drop most getters/setters] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-22-seanjc@google.com
# f33d3946 | 02-Nov-2022 | Sean Christopherson <seanjc@google.com>
x86/pmu: Reset GP and Fixed counters during pmu_init().
In generic PMU testing, it is very common to initialize the test environment by resetting counter registers. Add helpers to reset all PMU counters for code reusability, and reset all counters during PMU initialization for good measure.
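A minimal sketch of such a reset helper, with hypothetical per-counter accessors standing in for the vendor-specific MSR writes:

    #include <stdint.h>

    static struct { int nr_gp_counters; int nr_fixed_counters; } pmu;

    /* Hypothetical accessors that resolve the vendor-specific MSR index. */
    static void write_gp_counter(int i, uint64_t val) { (void)i; (void)val; }
    static void write_fixed_counter(int i, uint64_t val) { (void)i; (void)val; }

    /* Zero every counter so each test starts from a known state. */
    static void pmu_reset_all_counters(void)
    {
        int i;

        for (i = 0; i < pmu.nr_gp_counters; i++)
            write_gp_counter(i, 0);
        for (i = 0; i < pmu.nr_fixed_counters; i++)
            write_fixed_counter(i, 0);
    }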
Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-21-seanjc@google.com
# 3f914933 | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86/pmu: Add helper to get fixed counter MSR index
Add a helper to get the MSR index of a fixed counter instead of manually calculating it; a future patch will add more users of the fixed counter MSRs.
No functional change intended.
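A sketch of the arithmetic the helper wraps; MSR_CORE_PERF_FIXED_CTR0 is the Intel SDM index, and the helper name is illustrative, not necessarily the one the patch introduces:

    #include <stdint.h>

    #define MSR_CORE_PERF_FIXED_CTR0    0x309u  /* Intel SDM */

    /* Fixed counters occupy consecutive MSR indexes: a simple offset. */
    static inline uint32_t fixed_counter_msr(int i)
    {
        return MSR_CORE_PERF_FIXED_CTR0 + i;
    }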
Signed-off-by: Like Xu <likexu@tencent.com> [sean: move to separate patch, write changelog] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-20-seanjc@google.com
# cda64e80 | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86/pmu: Track GP counter and event select base MSRs in pmu_caps
Snapshot the base MSRs for GP counters and event selects during pmu_init() so that tests don't need to manually compute the bases.
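A sketch of macro-styled helpers over the snapshotted bases; a unit stride is assumed here, whereas AMD's F15H counter/select pairs interleave with a stride of 2, which the real helpers would have to handle:

    #include <stdint.h>

    static struct {
        uint32_t msr_gp_counter_base;      /* e.g. 0x0c1, IA32_PMC0        */
        uint32_t msr_gp_event_select_base; /* e.g. 0x186, IA32_PERFEVTSEL0 */
    } pmu;

    /* Macro-styled helpers so call sites read like MSR names. */
    static inline uint32_t MSR_GP_COUNTERx(int i)
    {
        return pmu.msr_gp_counter_base + i;
    }

    static inline uint32_t MSR_GP_EVENT_SELECTx(int i)
    {
        return pmu.msr_gp_event_select_base + i;
    }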
Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Like Xu <likexu@tencent.com> [sean: rename helpers to look more like macros, drop wrmsr wrappers] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-19-seanjc@google.com
# 414ee7d1 | 02-Nov-2022 | Sean Christopherson <seanjc@google.com>
x86/pmu: Drop wrappers that just passthrough pmu_caps fields
Drop wrappers that are and always will be pure passthroughs of pmu_caps fields, e.g. the number of fixed/general purpose counters can always be determined during PMU initialization and doesn't need runtime logic.
No functional change intended.
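For illustration, the shape of wrapper being dropped and what call sites use instead; the names are representative, not the patch's exact set:

    static struct { int nr_gp_counters; } pmu;

    /* The kind of wrapper being dropped: a pure passthrough, no logic. */
    static inline int pmu_nr_gp_counters(void)
    {
        return pmu.nr_gp_counters;
    }

    /* Call sites now read the snapshotted field directly instead:
     *
     *     for (i = 0; i < pmu.nr_gp_counters; i++)
     *         ...
     */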
Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-18-seanjc@google.com
# f85e94a2 | 02-Nov-2022 | Sean Christopherson <seanjc@google.com>
x86/pmu: Snapshot CPUID.0xA PMU capabilities during BSP initialization
Snapshot PMU info from CPUID.0xA into "struct pmu_caps pmu" during pmu_init() instead of reading CPUID.0xA every time a test wants to query PMU capabilities. Using pmu_caps to track various properties will also make it easier to hide the differences between AMD and Intel PMUs.
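A sketch of the snapshot, assuming a raw_cpuid() wrapper; the field positions follow the SDM's CPUID.0AH layout, and the struct is a simplified stand-in:

    #include <stdint.h>

    static struct {
        uint8_t version;
        uint8_t nr_gp_counters;
        uint8_t gp_counter_width;
        uint8_t nr_fixed_counters;
        uint8_t fixed_counter_width;
    } pmu;

    struct cpuid_regs { uint32_t a, b, c, d; };

    /* Raw CPUID wrapper. */
    static struct cpuid_regs raw_cpuid(uint32_t fn, uint32_t idx)
    {
        struct cpuid_regs r;
        __asm__ volatile("cpuid"
                         : "=a"(r.a), "=b"(r.b), "=c"(r.c), "=d"(r.d)
                         : "a"(fn), "c"(idx));
        return r;
    }

    /* One CPUID at init instead of one per query. */
    static void pmu_snapshot_cpuid_0xa(void)
    {
        struct cpuid_regs id = raw_cpuid(0xa, 0);

        pmu.version = id.a & 0xff;
        pmu.nr_gp_counters = (id.a >> 8) & 0xff;
        pmu.gp_counter_width = (id.a >> 16) & 0xff;
        pmu.nr_fixed_counters = id.d & 0x1f;
        pmu.fixed_counter_width = (id.d >> 5) & 0xff;
    }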
Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-17-seanjc@google.com
# 879e7f07 | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86/pmu: Snapshot PMU perf_capabilities during BSP initialization
Add a global "struct pmu_caps pmu" to snapshot PMU capabilities during the final stages of BSP initialization. Use the new hooks to snapshot PERF_CAPABILITIES instead of re-reading the MSR every time a test wants to query capabilities. A software-defined struct will also simplify extending support to AMD CPUs, as many of the differences between AMD and Intel can be handled during pmu_init().
Init the PMU caps for all tests so that tests don't need to remember to call pmu_init() before using any of the PMU helpers, e.g. the nVMX test uses this_cpu_has_pmu(), which will be converted to rely on the global struct in a future patch.
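A sketch of the PERF_CAPABILITIES snapshot under stated assumptions (hypothetical rdmsr() stub and PDCM probe; PDCM is CPUID.1:ECX bit 15):

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_IA32_PERF_CAPABILITIES    0x345u  /* Intel SDM */

    static struct { uint64_t perf_cap; } pmu;

    /* Hypothetical stubs: RDMSR wrapper and CPUID.1:ECX.PDCM probe. */
    static uint64_t rdmsr(uint32_t index) { (void)index; return 0; }
    static bool this_cpu_has_pdcm(void) { return false; }

    /*
     * Read PERF_CAPABILITIES once, and only if CPUID says the MSR
     * exists; every later query is a struct read instead of an RDMSR
     * (a VM-Exit when virtualized).
     */
    static void pmu_snapshot_perf_caps(void)
    {
        if (this_cpu_has_pdcm())
            pmu.perf_cap = rdmsr(MSR_IA32_PERF_CAPABILITIES);
    }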
Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Like Xu <likexu@tencent.com> [sean: reword changelog] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-16-seanjc@google.com
# 9f17508d | 02-Nov-2022 | Like Xu <likexu@tencent.com>
x86/pmu: Add lib/x86/pmu.[c.h] and move common code to header files
Given all the PMU stuff coming in, we need e.g. lib/x86/pmu.h to hold all of the hardware-defined stuff, e.g. #defines, accessors, helpers, and structs that are dictated by hardware. This will greatly help with code reuse and reduce unnecessary VM-Exits.
Opportunistically move the LBR MSR definitions to processor.h.
Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-14-seanjc@google.com