History log of /kvm-unit-tests/lib/x86/ (Results 76 – 100 of 414)
Revision Date Author Comments
3f914933  02-Nov-2022  Like Xu <likexu@tencent.com>

x86/pmu: Add helper to get fixed counter MSR index

Add a helper to get the index of a fixed counter instead of manually
calculating the index, a future patch will add more users of the fixed
counter MSRs.

No functional change intended.

Signed-off-by: Like Xu <likexu@tencent.com>
[sean: move to separate patch, write changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-20-seanjc@google.com

cda64e80  02-Nov-2022  Like Xu <likexu@tencent.com>

x86/pmu: Track GP counter and event select base MSRs in pmu_caps

Snapshot the base MSRs for GP counters and event selects during pmu_init()
so that tests don't need to manually compute the bases.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
[sean: rename helpers to look more like macros, drop wrmsr wrappers]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-19-seanjc@google.com

414ee7d1  02-Nov-2022  Sean Christopherson <seanjc@google.com>

x86/pmu: Drop wrappers that just passthrough pmu_caps fields

Drop wrappers that are and always will be pure passthroughs of pmu_caps
fields, e.g. the number of fixed/general_purpose counters can always be
determined during PMU initialization and doesn't need runtime logic.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-18-seanjc@google.com

f85e94a2  02-Nov-2022  Sean Christopherson <seanjc@google.com>

x86/pmu: Snapshot CPUID.0xA PMU capabilities during BSP initialization

Snapshot PMU info from CPUID.0xA into "struct pmu_caps pmu" during
pmu_init() instead of reading CPUID.0xA every time a test wants to query
PMU capabilities. Using pmu_caps to track various properties will also
make it easier to hide the differences between AMD and Intel PMUs.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-17-seanjc@google.com

879e7f07  02-Nov-2022  Like Xu <likexu@tencent.com>

x86/pmu: Snapshot PMU perf_capabilities during BSP initialization

Add a global "struct pmu_caps pmu" to snapshot PMU capabilities
during the final stages of BSP initialization. Use the new hooks to
snapshot PERF_CAPABILITIES instead of re-reading the MSR every time a
test wants to query capabilities. A software-defined struct will also
simplify extending support to AMD CPUs, as many of the differences
between AMD and Intel can be handled during pmu_init().

Init the PMU caps for all tests so that tests don't need to remember to
call pmu_init() before using any of the PMU helpers, e.g. the nVMX test
uses this_cpu_has_pmu(), which will be converted to rely on the global
struct in a future patch.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
[sean: reword changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-16-seanjc@google.com

d6d3a3bd  02-Nov-2022  Sean Christopherson <seanjc@google.com>

x86: Add a helper for the BSP's final init sequence common to all flavors

Add bsp_rest_init() to dedup bringing up APs and doing SMP initialization
across 32-bit, 64-bit, and EFI flavors of KVM-unit-tests. The common
bucket will also be used in future patches to init things that aren't
SMP related and thus don't fit in smp_init(), e.g. PMU setup.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-15-seanjc@google.com

9f17508d  02-Nov-2022  Like Xu <likexu@tencent.com>

x86/pmu: Add lib/x86/pmu.[c.h] and move common code to header files

Given all the PMU stuff coming in, we need e.g. lib/x86/pmu.h to hold all
of the hardware-defined stuff, e.g. #defines, accessors, helpers and structs
that are dictated by hardware. This will greatly help with code reuse and
reduce unnecessary VM-exits.

Opportunistically move the LBR MSR definitions to processor.h.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-14-seanjc@google.com

85c21181  02-Nov-2022  Like Xu <likexu@tencent.com>

x86/pmu: Update rdpmc testcase to cover #GP path

Specifying an unsupported PMC encoding will cause a #GP(0).

There are multiple reasons RDPMC can #GP, the one that is being relied
on to guarantee #GP is specifically that the PMC is invalid. The most
extensible solution is to provide a safe variant.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-12-seanjc@google.com

c3cde0a5  02-Nov-2022  Like Xu <likexu@tencent.com>

x86/pmu: Add PDCM check before accessing PERF_CAP register

On virtual platforms without PDCM support (e.g. AMD), #GP
failure on MSR_IA32_PERF_CAPABILITIES is completely avoidable.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Like Xu <likexu@tencent.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221102225110.3023543-2-seanjc@google.com

baf248c5  01-Oct-2022  Sean Christopherson <seanjc@google.com>

x86/apic: Add test to verify aliased xAPIC IDs both receive IPI

Verify that multiple vCPUs with the same physical xAPIC ID receive an
IPI sent to said ID. Note, on_cpu() maintains its own CPU=>ID map and
is effectively unusable after changing the xAPIC ID. Update each vCPU's
xAPIC ID from within the IRQ handler so as to avoid having to send
yet another IPI from vCPU0 to tell vCPU1 to update its ID.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221001011301.2077437-10-seanjc@google.com

694e59ba  05-Oct-2022  Manali Shukla <manali.shukla@amd.com>

x86: nSVM: Move part of #NM test to exception test framework

Remove the boilerplate code for the #NM test and move the #NM exception test
into the exception test framework.

Keep the test case for the condition where #NM exception is not
generated, but drop the #NM handler entirely and rely on an unexpected
exception being reported as such (the VMMCALL assertion would also fail).

Signed-off-by: Manali Shukla <manali.shukla@amd.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221005235212.57836-9-seanjc@google.com

5faf5f60  05-Oct-2022  Sean Christopherson <seanjc@google.com>

nVMX: Move #OF test to generic exceptions test

Move the INTO=>#OF test, along with its more precise checking of the
exit interrupt info, to the generic nVMX exceptions test.

Move the helper that generates #OF to processor.h so that it can be
reused by nSVM for an identical test.

Note, this effectively adds new checks for all other vectors, i.e.
affects more vectors than just #OF.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221005235212.57836-4-seanjc@google.com

e39bee8f  05-Oct-2022  Sean Christopherson <seanjc@google.com>

x86: Move helpers to generate misc exceptions to processor.h

Move nested VMX's helpers to generate miscellaneous exceptions, e.g. #DE,
to processor.h so that they can be used for nearly-identical nested SVM
tests.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221005235212.57836-3-seanjc@google.com

57d88778  30-Sep-2022  Sean Christopherson <seanjc@google.com>

x86: Handle all known exceptions with ASM_TRY()

Install the ASM_TRY() exception handler for all known exception vectors
so that ASM_TRY() can be used for other exceptions, e.g. #PF. ASM_TRY()
might not Just Work in all cases, but there's no good reason to limit
usage to just #DE, #UD, and #GP.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220930232450.1677811-2-seanjc@google.com

350e77c3  23-Aug-2022  Vasant Karasulli <vkarasulli@suse.de>

x86: efi: set up the IDT before accessing MSRs.

Reading or writing MSR_IA32_APICBASE is typically an intercepted
operation and causes #VC exception when the test is launched as
an SEV-ES guest.

So calling pre_boot_apic_id() and reset_apic() before the IDT is
set up in setup_idt() and load_idt() might cause problems.

Hence move percpu data setup and reset_apic() call after
setup_idt() and load_idt().

Fixes: 3c50214c97f173f5e0f82c7f248a7c62707d8748 (x86: efi: Provide percpu storage)
Signed-off-by: Vasant Karasulli <vkarasulli@suse.de>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20220823094328.8458-1-vkarasulli@suse.de
Signed-off-by: Sean Christopherson <seanjc@google.com>

7948d4b6  08-Aug-2022  Sean Christopherson <seanjc@google.com>

x86: Add helper to detect if forced emulation prefix is available

Add a helper to detect whether or not KVM's forced emulation prefix is
available. Use the helper to replace equivalent functionality in the
emulator test.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220808164707.537067-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

dfb0ec0f  08-Aug-2022  Michal Luczaj <mhal@rbox.co>

x86: Introduce ASM_TRY_FEP() to handle exceptions on forced emulation

Introduce ASM_TRY_FEP() to allow using the try-catch method to handle
exceptions that occur on forced emulation. ASM_TRY() mishandles
exceptions thrown by the forced-emulation-triggered emulator: while the
faulting address stored in the exception table points at the forced
emulation prefix, when an exception comes, RIP is 5 bytes (the size of
KVM_FEP) ahead, as KVM advances RIP to skip the prefix, and so the
exception ends up unhandled.

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220808164707.537067-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

6a29a0a1  08-Aug-2022  Sean Christopherson <seanjc@google.com>

x86: Dedup 32-bit vs. 64-bit ASM_TRY() by stealing kernel's __ASM_SEL()

Steal the kernel's __ASM_SEL() implementation and use it to consolidate
ASM_TRY(). The only difference between the 32-bit and 64-bit versions is
the size of the address stored in the table.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220808164707.537067-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

a106b30d  26-Jul-2022  Paolo Bonzini <pbonzini@redhat.com>

x86: add and use *_BIT constants for CR0, CR4, EFLAGS

The "BIT" macro cannot be used in top-level assembly statements
(it can be used in functions through the "i" constraint), because
old binutils such as the one in CentOS 7 do not support the "1UL"
syntax for numerals.

To avoid having to hard-code EFLAGS.AC being bit 18, define the constants
for CR0, CR4 and EFLAGS bits in terms of new macros for just the bit
number.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

c2f434b1  26-Jul-2022  Paolo Bonzini <pbonzini@redhat.com>

x86: smp: fix 32-bit build

On macOS the 32-bit build gives the following warning:

lib/x86/smp.c:89:29: error: format '%d' expects argument of type 'int', but argument 2 has type 'uint32_t' {aka 'long unsigned int'} [-Werror=format=]
89 | printf("setup: CPU %d online\n", apic_id());
| ~^ ~~~~~~~~~
| | |
| int uint32_t {aka long unsigned int}
| %ld

Fix by using the inttypes.h printf formats.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

9d9000a5  11-Jul-2022  Yang Weijiang <weijiang.yang@intel.com>

x86: Check platform PMU capabilities before running LBR tests

Use the new helper to check whether the PMU is available and Perfmon/Debug
capabilities are supported before reading MSR_IA32_PERF_CAPABILITIES, to
avoid test failure. The issue can be observed when enable_pmu=0.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Link: https://lore.kernel.org/r/20220711041841.126648-5-weijiang.yang@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2b4c8e50  11-Jul-2022  Yang Weijiang <weijiang.yang@intel.com>

x86: Skip perf-related tests when the platform cannot support them

Add helpers to check whether MSR_CORE_PERF_GLOBAL_CTRL and rdpmc are
supported in KVM. When pmu is disabled with enable_pmu=0, reading
MSR_CORE_PERF_GLOBAL_CTRL or executing rdpmc leads to #GP, so skip
related tests in this case to avoid test failure.

Opportunistically hoist the mwait support check into a helper and update
related code.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Link: https://lore.kernel.org/r/20220711041841.126648-4-weijiang.yang@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2719b92c  11-Jul-2022  Yang Weijiang <weijiang.yang@intel.com>

x86: Use helpers to fetch supported perf capabilities

Add helpers to query PMU info from CPUID(0xA) and use them instead of
caching the information in global (to the PMU test) unions. Other tests
can also use the helpers to check PMU capabilities.

No functional change intended.

Signed-off-by: Yang Weijiang <weijiang.yang@intel.com>
Link: https://lore.kernel.org/r/20220711041841.126648-3-weijiang.yang@intel.com
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>

45472bc5  21-Jul-2022  Sean Christopherson <seanjc@google.com>

nVMX: Move wrappers of this_cpu_has() to nVMX's VM-Exit test

Move wrappers of this_cpu_has() whose sole purpose is to be queried as a
callback in VM-Exit tests into vmxexit.c in order to discourage general
use, i.e. force tests to use this_cpu_has().

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>

816c0359  21-Jul-2022  Sean Christopherson <seanjc@google.com>

x86: Drop cpuid_osxsave(), just use this_cpu_has(X86_FEATURE_OSXSAVE)

Drop cpuid_osxsave(), which is just an open coded implementation of
this_cpu_has(X86_FEATURE_OSXSAVE).

Signed-off-by: Sean Christopherson <seanjc@google.com>
