#
dca3f4c0 |
| 24-Feb-2025 |
Paolo Bonzini <pbonzini@redhat.com> |
Merge tag 'kvm-x86-2025.02.21' of https://github.com/kvm-x86/kvm-unit-tests into HEAD
KVM-Unit-Tests x86 changes:
- Expand the per-CPU data+stack area to 12KiB per CPU to reduce the probability of tests overflowing their stack and clobbering per-CPU data.
- Add testcases for LA57 canonical checks.
- Add testcases for LAM.
- Add a smoke test to make sure KVM doesn't bleed split-lock #AC/#DB into the guest.
- Fix many warts and bugs in the PMU test, and prepare it for PMU version 5 and beyond.
- Many misc fixes and cleanups.
|
#
8d1acfe4 |
| 15-Feb-2025 |
Xiong Zhang <xiong.y.zhang@intel.com> |
x86: pmu: Remove duplicate code in pmu_init()
There is duplicated code in the pmu_init() helper; remove the duplicate code.
Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Reviewed-by: Mingwei Zhang <mizhang@google.com> Link: https://lore.kernel.org/r/20250215013636.1214612-2-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
|
#
952cf19c |
| 02-Nov-2022 |
Like Xu <likexu@tencent.com> |
x86/pmu: Add AMD Guest PerfMonV2 testcases
Updated test cases to cover KVM enabling code for AMD Guest PerfMonV2.
The Intel-specific PMU helpers were added to check for AMD cpuid, and some of the same semantics of MSRs were assigned during the initialization phase. The vast majority of pmu test cases are reused seamlessly.
On some x86 machines (AMD only), even with retired events, repeatedly measuring the same workload produces an erratic number of collected events. This essentially reflects details of the hardware implementation; from a software perspective, these events are imprecise, which is why a tolerance check is added to the counter overflow testcases.
Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-28-seanjc@google.com
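The tolerance check described above can be sketched as a simple bounds test: instead of demanding an exact count for an imprecise event, accept any result within a percentage of the expected value. This is an illustrative sketch, not the actual kvm-unit-tests code; the function name and signature are hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of a tolerance check for imprecise events:
 * accept a measured count within +/- tol_pct percent of the
 * expected value instead of requiring an exact match.
 */
static bool count_within_tolerance(uint64_t count, uint64_t expected,
				   unsigned int tol_pct)
{
	uint64_t slack = expected * tol_pct / 100;

	return count >= expected - slack && count <= expected + slack;
}
```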
|
#
b883751a |
| 02-Nov-2022 |
Like Xu <likexu@tencent.com> |
x86/pmu: Update testcases to cover AMD PMU
The AMD core PMU before Zen4 did not have version numbers, had no fixed counters, and had a hard-coded number of generic counters and bit width; only hardware events common across AMD generations (starting with K7) were added to the amd_gp_events[] table.
All of the above differences are instantiated at the detection step, which also covers the K7 PMU registers, consistent with bare metal.
Cc: Sandipan Das <sandipan.das@amd.com> Signed-off-by: Like Xu <likexu@tencent.com> [sean: set bases to K7 values for !PERFCTR_CORE case (reported by Paolo)] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-27-seanjc@google.com
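The K7 fallback mentioned in Sean's note can be sketched as a base-MSR selection at detection time. The MSR index values below are the ones defined in Linux's msr-index.h; the struct and function names are hypothetical, not the actual test code.

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* MSR indexes as defined in Linux's arch/x86/include/asm/msr-index.h. */
#define MSR_K7_EVNTSEL0		0xc0010000u
#define MSR_K7_PERFCTR0		0xc0010004u
#define MSR_F15H_PERF_CTL0	0xc0010200u
#define MSR_F15H_PERF_CTR0	0xc0010201u

struct amd_pmu_bases {
	uint32_t msr_gp_counter_base;
	uint32_t msr_gp_event_select_base;
};

/*
 * Sketch: pick the base MSRs at detection time.  Without PERFCTR_CORE,
 * fall back to the legacy K7 registers, matching bare-metal behavior.
 */
static struct amd_pmu_bases amd_select_bases(bool has_perfctr_core)
{
	struct amd_pmu_bases b;

	if (has_perfctr_core) {
		b.msr_gp_counter_base = MSR_F15H_PERF_CTR0;
		b.msr_gp_event_select_base = MSR_F15H_PERF_CTL0;
	} else {
		b.msr_gp_counter_base = MSR_K7_PERFCTR0;
		b.msr_gp_event_select_base = MSR_K7_EVNTSEL0;
	}
	return b;
}
```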
|
#
dd602b6f |
| 02-Nov-2022 |
Sean Christopherson <seanjc@google.com> |
x86/pmu: Add pmu_caps flag to track if CPU is Intel (versus AMD)
Add a flag to track whether the PMU is backed by an Intel CPU. Future support for AMD will sadly need to constantly check whether the PMU is Intel or AMD, and invoking is_intel() every time is rather expensive, as it requires CPUID (a VM-Exit) and a string comparison.
Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-26-seanjc@google.com
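The optimization amounts to doing the vendor-string comparison once and caching the result as a flag. A minimal sketch, with a hypothetical helper name (the real code derives the vendor string from CPUID leaf 0's EBX/EDX/ECX):

```c
#include <assert.h>
#include <string.h>
#include <stdbool.h>

/* One string comparison, done once at init; thereafter a cheap flag test. */
static bool vendor_is_intel(const char *vendor)
{
	return strcmp(vendor, "GenuineIntel") == 0;
}

/*
 * Sketch of the cached flag: set pmu.is_intel once in pmu_init()
 * instead of re-invoking CPUID (a VM-Exit) plus strcmp on every check.
 */
struct pmu_caps_sketch {
	bool is_intel;
};
```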
|
#
62ba5036 |
| 02-Nov-2022 |
Like Xu <likexu@tencent.com> |
x86/pmu: Add global helpers to cover Intel Arch PMU Version 1
To test Intel arch PMU version 1, most of the basic framework and the use cases that exercise any PMU counter do not require changes, except that registers introduced only in PMU version 2 must not be accessed.
Adding some guard checks seamlessly supports version 1, while opening the door for normal AMD PMU tests.
Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-24-seanjc@google.com
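The guard checks boil down to gating version-2-only register accesses on the enumerated PMU version. A hypothetical sketch of such a predicate:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch: GLOBAL_STATUS/GLOBAL_CTRL and the fixed counters were
 * introduced with arch PMU version 2, so tests must skip accesses
 * to those registers on version 1 (and on pre-PerfMonV2 AMD).
 */
static bool pmu_has_global_msrs(int pmu_version)
{
	return pmu_version > 1;
}
```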
|
#
8a2866d1 |
| 02-Nov-2022 |
Like Xu <likexu@tencent.com> |
x86/pmu: Track global status/control/clear MSRs in pmu_caps
Track the global PMU MSRs in pmu_caps so that tests don't need to manually differentiate between AMD and Intel. Although AMD and Intel PMUs have the same semantics in terms of global control features (including ctl and status), their MSR indexes are not the same.
Signed-off-by: Like Xu <likexu@tencent.com> [sean: drop most getters/setters] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-22-seanjc@google.com
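The same-semantics/different-index situation can be illustrated with the MSR values from Linux's msr-index.h; the struct and function below are a hypothetical sketch of the pmu_caps tracking, not the actual test code.

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* MSR indexes from Linux's msr-index.h. */
#define MSR_CORE_PERF_GLOBAL_STATUS		0x0000038eu
#define MSR_CORE_PERF_GLOBAL_CTRL		0x0000038fu
#define MSR_CORE_PERF_GLOBAL_OVF_CTRL		0x00000390u
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS	0xc0000300u
#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL		0xc0000301u
#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR	0xc0000302u

struct global_msrs {
	uint32_t status;	/* read overflow/status bits   */
	uint32_t ctl;		/* global enable bits          */
	uint32_t status_clr;	/* write-to-clear status bits  */
};

/* Sketch: snapshot the vendor-specific indexes once at init. */
static struct global_msrs pmu_global_msrs(bool is_intel)
{
	if (is_intel)
		return (struct global_msrs){ MSR_CORE_PERF_GLOBAL_STATUS,
					     MSR_CORE_PERF_GLOBAL_CTRL,
					     MSR_CORE_PERF_GLOBAL_OVF_CTRL };
	return (struct global_msrs){ MSR_AMD64_PERF_CNTR_GLOBAL_STATUS,
				     MSR_AMD64_PERF_CNTR_GLOBAL_CTL,
				     MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR };
}
```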
|
#
f33d3946 |
| 02-Nov-2022 |
Sean Christopherson <seanjc@google.com> |
x86/pmu: Reset GP and Fixed counters during pmu_init().
In generic PMU testing, it is very common to initialize the test environment by resetting counter registers. Add helpers to reset all PMU counters for code reuse, and reset all counters during PMU initialization for good measure.
Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-21-seanjc@google.com
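Structurally, such a reset helper is just a loop zeroing each counter. In the sketch below an array stands in for the counter MSRs (the real helper would wrmsr() zero to each index); names are hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

/* Sketch: zero every counter; a real helper would wrmsr() each MSR. */
static void pmu_reset_counters(uint64_t *counters, int nr)
{
	for (int i = 0; i < nr; i++)
		counters[i] = 0;
}

/* Convenience check that every counter reads back as zero. */
static bool pmu_counters_are_zero(const uint64_t *counters, int nr)
{
	for (int i = 0; i < nr; i++)
		if (counters[i])
			return false;
	return true;
}
```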
|
#
cda64e80 |
| 02-Nov-2022 |
Like Xu <likexu@tencent.com> |
x86/pmu: Track GP counter and event select base MSRs in pmu_caps
Snapshot the base MSRs for GP counters and event selects during pmu_init() so that tests don't need to manually compute the bases.
Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Like Xu <likexu@tencent.com> [sean: rename helpers to look more like macros, drop wrmsr wrappers] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-19-seanjc@google.com
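The macro-like helpers Sean's note refers to compute a counter's MSR from the snapshotted base plus an index (with a stride, since some AMD layouts interleave ctl/ctr MSRs). A hypothetical sketch; the real helpers operate on a global pmu struct rather than taking it as a parameter:

```c
#include <assert.h>
#include <stdint.h>

struct pmu_bases {
	uint32_t gp_counter_base;	/* e.g. 0xc1 (IA32_PMC0) on Intel      */
	uint32_t gp_event_select_base;	/* e.g. 0x186 (IA32_PERFEVTSEL0)       */
	uint32_t stride;		/* 2 where ctl/ctr MSRs interleave, else 1 */
};

/* Macro-like helpers: compute the i-th counter/event-select MSR index. */
static uint32_t MSR_GP_COUNTERx(const struct pmu_bases *b, int i)
{
	return b->gp_counter_base + (uint32_t)i * b->stride;
}

static uint32_t MSR_GP_EVENT_SELECTx(const struct pmu_bases *b, int i)
{
	return b->gp_event_select_base + (uint32_t)i * b->stride;
}
```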
|
#
f85e94a2 |
| 02-Nov-2022 |
Sean Christopherson <seanjc@google.com> |
x86/pmu: Snapshot CPUID.0xA PMU capabilities during BSP initialization
Snapshot PMU info from CPUID.0xA into "struct pmu_caps pmu" during pmu_init() instead of reading CPUID.0xA every time a test wants to query PMU capabilities. Using pmu_caps to track various properties will also make it easier to hide the differences between AMD and Intel PMUs.
Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-17-seanjc@google.com
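The snapshot amounts to decoding CPUID.0xA's EAX/EDX fields once into the struct. The field layout below follows the Intel SDM (EAX[7:0] version, EAX[15:8] GP counter count, EAX[23:16] GP counter width; EDX[4:0] fixed counter count, EDX[12:5] fixed counter width); the struct and function are a hypothetical sketch.

```c
#include <assert.h>
#include <stdint.h>

struct pmu_caps {
	uint8_t version;
	uint8_t nr_gp_counters;
	uint8_t gp_counter_width;
	uint8_t nr_fixed_counters;
	uint8_t fixed_counter_width;
};

/* Decode CPUID.0xA EAX/EDX per the SDM's field layout, once at init. */
static struct pmu_caps pmu_decode_cpuid_0xa(uint32_t eax, uint32_t edx)
{
	struct pmu_caps c;

	c.version             = eax & 0xff;
	c.nr_gp_counters      = (eax >> 8) & 0xff;
	c.gp_counter_width    = (eax >> 16) & 0xff;
	c.nr_fixed_counters   = edx & 0x1f;
	c.fixed_counter_width = (edx >> 5) & 0xff;
	return c;
}
```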
|
#
879e7f07 |
| 02-Nov-2022 |
Like Xu <likexu@tencent.com> |
x86/pmu: Snapshot PMU perf_capabilities during BSP initialization
Add a global "struct pmu_caps pmu" to snapshot PMU capabilities during the final stages of BSP initialization. Use the new hooks to snapshot PERF_CAPABILITIES instead of re-reading the MSR every time a test wants to query capabilities. A software-defined struct will also simplify extending support to AMD CPUs, as many of the differences between AMD and Intel can be handled during pmu_init().
Init the PMU caps for all tests so that tests don't need to remember to call pmu_init() before using any of the PMU helpers, e.g. the nVMX test uses this_cpu_has_pmu(), which will be converted to rely on the global struct in a future patch.
Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Like Xu <likexu@tencent.com> [sean: reword changelog] Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-16-seanjc@google.com
|
#
9f17508d |
| 02-Nov-2022 |
Like Xu <likexu@tencent.com> |
x86/pmu: Add lib/x86/pmu.[c.h] and move common code to header files
Given all the PMU stuff coming in, we need e.g. lib/x86/pmu.h to hold all of the hardware-defined stuff, e.g. #defines, accessors, helpers and structs that are dictated by hardware. This will greatly help with code reuse and reduce unnecessary VM-Exits.
Opportunistically move the LBR MSR definitions to processor.h.
Suggested-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Like Xu <likexu@tencent.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20221102225110.3023543-14-seanjc@google.com
|